A selection of the most important recent news, articles, and papers about AI Chipsets.
General News, Articles, and Analyses
1. AI Reinvents Chip Design – Communications of the ACM
https://cacm.acm.org/news/ai-reinvents-chip-design/
Author: Samuel Greengard
(Thursday, August 22, 2024) “Chipmakers are turning to artificial intelligence to improve conceptual design, transistor models, simulations and analysis, verification and testing, and more.”
2. NVIDIA to Present Innovations at Hot Chips That Boost Data Center Performance and Energy Efficiency | NVIDIA Blog
https://blogs.nvidia.com/blog/hot-chips-2024/
Author: Dave Salvator
(Friday, August 23, 2024) “Across four talks at the conference, NVIDIA engineers will share details about the NVIDIA Blackwell platform, new research on liquid cooling and AI agents to support chip design.”
3. Imagination Raises $100 Million Investment To Take On Edge AI
https://www.eetimes.com/imagination-raises-100-million-investment-to-take-on-edge-ai/
Author: Sally Ward-Foxton
(Friday, August 23, 2024) “British IP provider Imagination Technologies has raised investment in the form of a $100 million convertible term loan from Fortress Investment Group. Imagination will use the funds to support the development of its technologies for graphics, compute and AI at the edge, as well as fund its ambitious growth objectives.”
4. Introducing the IBM Spyre AI accelerator chip – IBM Research
https://research.ibm.com/blog/spyre-for-z
(Monday, August 26, 2024) “At Hot Chips 2024, IBM previewed its new Spyre accelerator chip for IBM Z, designed in collaboration with IBM Research, to scale the enterprise AI workloads of tomorrow.”
5. New IBM Processor Innovations To Accelerate AI on Next-Generation IBM Z Mainframe Systems – Aug 26, 2024
https://newsroom.ibm.com/ai-on-z
(Monday, August 26, 2024) “IBM (NYSE: IBM) revealed architecture details for the upcoming IBM Telum® II Processor and IBM Spyre™ Accelerator at Hot Chips 2024. The new technologies are designed to significantly scale processing capacity across next-generation IBM Z mainframe systems, helping accelerate the use of traditional AI models and Large Language AI models in tandem through a new ensemble method of AI.”
6. AMD’s AI Plan: The Nvidia Killer or a Wasted Effort?
https://www.hpcwire.com/2024/08/26/amds-ai-plan-the-nvidia-killer-or-a-wasted-effort/
Author: Doug Eadline
(Monday, August 26, 2024) “An AMD call to discuss its $4.9 billion acquisition of ZT Systems provided an inside look into how Lisa Su is building her AI empire. She laid down an AMD AI landscape that is polar opposite to Nvidia’s proprietary approach.”
7. Cerebras Launches the World’s Fastest AI Inference
(Tuesday, August 27, 2024) “Today, Cerebras Systems, the pioneer in high performance AI compute, announced Cerebras Inference, the fastest AI inference solution in the world. Delivering 1,800 tokens per second for Llama 3.1 8B and 450 tokens per second for Llama 3.1 70B, Cerebras Inference is 20 times faster than NVIDIA GPU-based solutions in hyperscale clouds. Starting at just 10c per million tokens, Cerebras Inference is priced at a fraction of GPU solutions, providing 100x higher price-performance for AI workloads.”
8. New Electronics – FuriosaAI unveils new AI inference chip
https://www.newelectronics.co.uk/content/news/furiosaai-unveils-new-ai-inference-chip/
Author: Neil Tyler
(Tuesday, August 27, 2024) “FuriosaAI, an emerging AI semiconductor company, has unveiled the RNGD AI accelerator at Hot Chips 2024.”