A selection of the most important recent news, articles, and papers about AI.
General News, Articles, and Analyses
CIOs turn to NIST to tackle generative AI’s many risks | CIO Dive
https://www.ciodive.com/news/cio-generative-ai-risk-mitigation-strategy-NIST-framework/728257/
Author: Lindsey Wilkinson
(Monday, September 30, 2024) “Discover’s risk reduction strategy closely follows the guidance laid out by the National Institute of Standards and Technology, which released a draft of its generative AI risk management framework in July.”
China to roll out cybersecurity rules covering generative AI – Nikkei Asia
NVIDIA AI Summit DC: Industry Leaders Gather to Showcase AI’s Real-World Impact
https://blogs.nvidia.com/blog/ai-summit-dc-2024/
Author: Claudia Cook
(Tuesday, October 1, 2024) “Washington, D.C., is where possibility has always met policy, and AI presents unparalleled opportunities for tackling global challenges. NVIDIA’s AI Summit in Washington, set for October 7-9, will gather industry leaders to explore how AI addresses some of society’s most significant challenges.”
AI Chipsets
Huawei guns for Nvidia market share in China — Ascend 910C GPU customer sampling begins | Tom’s Hardware
(Sunday, September 29, 2024) “Citing sources familiar with the matter, the report says “large Chinese server companies… and internet firms” have received samples of the Ascend 910C. Although this new GPU is described as an upgraded Ascend 910B, it’s been unclear what exactly the chip is made of, ever since a report in August revealed its existence. Interestingly, the 910C may be able to outperform Nvidia’s upcoming Blackwell-based B20 according to a prediction made by SemiAnalysis’s Dylan Patel.”
Coding and Software Engineering
[2409.18661] Not the Silver Bullet: LLM-enhanced Programming Error Messages are Ineffective in Practice
https://arxiv.org/abs/2409.18661
Authors: Santos, Eddie Antonio and Becker, Brett A.
(Friday, September 27, 2024) “The sudden emergence of large language models (LLMs) such as ChatGPT has had a disruptive impact throughout the computing education community. LLMs have been shown to excel at producing correct code to CS1 and CS2 problems, and can even act as friendly assistants to students learning how to code. Recent work shows that LLMs demonstrate unequivocally superior results in being able to explain and resolve compiler error messages – for decades, one of the most frustrating parts of learning how to code. However, LLM-generated error message explanations have only been assessed by expert programmers in artificial conditions. This work sought to understand how novice programmers resolve programming error messages (PEMs) in a more realistic scenario. We ran a within-subjects study with n = 106 participants in which students were tasked to fix six buggy C programs. For each program, participants were randomly assigned to fix the problem using either a stock compiler error message, an expert-handwritten error message, or an error message explanation generated by GPT-4. Despite promising evidence on synthetic benchmarks, we found that GPT-4 generated error messages outperformed conventional compiler error messages in only 1 of the 6 tasks, measured by students’ time-to-fix each problem. Handwritten explanations still outperform LLM and conventional error messages, both on objective and subjective measures.”
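The GPT-4 condition in the study pairs each buggy C program with an LLM-generated explanation of its compiler error. A minimal sketch of how such a request might be assembled is below; the prompt wording, the example bug, and the model name are illustrative assumptions, not the authors’ actual protocol:

```python
# Sketch: assemble a prompt asking an LLM to explain a C compiler error
# to a novice. The prompt template is an assumption for illustration.

def build_explanation_prompt(source_code: str, compiler_output: str) -> str:
    """Combine a buggy C program and its compiler error into one prompt."""
    return (
        "You are helping a novice C programmer.\n"
        "Explain the following compiler error in plain language and "
        "suggest a fix, without writing the full solution.\n\n"
        f"Program:\n{source_code}\n\n"
        f"Compiler output:\n{compiler_output}"
    )

# Hypothetical inputs: a classic missing-semicolon bug.
buggy_program = "int main(void) { int x = 5\n    return x; }"
gcc_error = "error: expected ',' or ';' before 'return'"

prompt = build_explanation_prompt(buggy_program, gcc_error)

# Sending the prompt would then go through an LLM client, e.g.
# (not executed here):
# client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": prompt}],
# )
```

The study’s headline result is that even with explanations like these, students fixed only 1 of 6 bugs faster than with the stock compiler message.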
Technical Papers, Articles, and Preprints
[2409.18335] A Fairness-Driven Method for Learning Human-Compatible Negotiation Strategies
https://arxiv.org/abs/2409.18335
[2409.18475] Data Analysis in the Era of Generative AI
https://arxiv.org/abs/2409.18475