Research Space / AI

Analysis of AI Scaling. Leaked reports about OpenAI and Google researchers suggest that their newest models are not significantly better than the previous versions, pointing to potential limits of the AI scaling laws.

insight • 6 hours ago • Via Artificial Intelligence Made Simple •

Expert Advice on AI Careers. Learn from smart people but think independently; build a broad base of technical and common-sense skills.

insight • 6 hours ago • Via Artificial Intelligence Made Simple •

Recommendation for Brevity. More, shorter posts might be helpful, since long content takes time and effort to digest.

recommendation • 6 hours ago • Via Artificial Intelligence Made Simple •

Content Length Feedback. Several readers have commented that the articles are very long and detailed.

data point • 6 hours ago • Via Artificial Intelligence Made Simple •

Energy Bottleneck. Among the many bottlenecks for AI data centers, energy might be the most important and difficult to address.

insight • 6 hours ago • Via Artificial Intelligence Made Simple •

Hallucinations in LLMs. Large language models are susceptible to hallucinations, and distinguishing types of hallucinations is crucial for mitigation.

insight • 6 hours ago • Via Artificial Intelligence Made Simple • arxiv.org

Nuclear Energy Agreement. Google signed the world's first corporate agreement to purchase nuclear energy from multiple small modular reactors to be developed by Kairos Power.

insight • 6 hours ago • Via Artificial Intelligence Made Simple • blog.google

Community Engagement. We started an AI Made Simple Subreddit to foster community interaction.

insight • 6 hours ago • Via Artificial Intelligence Made Simple • www.reddit.com

Support Model. We follow a 'pay what you can' model, which allows you to support within your means.

data point • 6 hours ago • Via Artificial Intelligence Made Simple •

Content Variety. The focus will be on AI and tech, but the ideas may range across business, philosophy, ethics, and much more.

insight • 6 hours ago • Via Artificial Intelligence Made Simple •

Gemini Model Success. Google's latest AI model, Gemini-Exp-1114, has topped the lmarena.ai Chatbot Arena leaderboard, surpassing OpenAI's GPT-4o and o1-preview reasoning model.

data point • 9 hours ago • Via Last Week in AI • www.tomsguide.com

Investment in xAI. Elon Musk's artificial intelligence company, xAI, is reportedly raising up to $6 billion at a $50 billion valuation to acquire 100,000 Nvidia chips for a new supercomputer in Memphis.

data point • 9 hours ago • Via Last Week in AI • www.cnbc.com

AI Music Advancements. Suno V4 introduces significant advancements in AI music generation, including improved audio quality, dynamic song structures, and innovative features like the ReMi lyrics assistant.

insight • 9 hours ago • Via Last Week in AI • 9meters.com

Figure 02 Performance. Figure AI's humanoid robot, Figure 02, has achieved a 400% increase in speed and a sevenfold improvement in success rate on BMW's production line.

data point • 9 hours ago • Via Last Week in AI • analyticsindiamag.com

Nvidia Chip Challenges. Nvidia's Blackwell GPUs were initially delayed by overheating issues in server racks; the problem appears to be resolved, but managing energy and heat in AI data centers remains a significant challenge.

insight • 9 hours ago • Via Last Week in AI • www.pcmag.com

Denmark AI Framework. Denmark's new framework, supported by Microsoft, provides guidelines for EU member states to responsibly implement AI in compliance with the EU's AI Act.

insight • 9 hours ago • Via Last Week in AI • www.cnbc.com

OpenAI Policy Blueprint. OpenAI's policy blueprint envisions a significant role for the U.S. government in AI development, emphasizing infrastructure, energy systems, and economic zones to boost productivity and counter China's influence.

insight • 9 hours ago • Via Last Week in AI • fedscoop.com

Job Market Changes. Generative AI tools like ChatGPT are rapidly reducing job opportunities in automation-prone fields, but those who adapt by acquiring AI skills may find new opportunities in the evolving job market.

insight • 9 hours ago • Via Last Week in AI • www.thealgorithmicbridge.com

AI in Healthcare. ChatGPT-4 outperformed doctors in diagnosing medical conditions, highlighting both the chatbot's superior accuracy and the potential overconfidence of doctors in their own diagnoses.

data point • 9 hours ago • Via Last Week in AI • www.nytimes.com

Productivity with Windsurf. Windsurf Editor by Codeium integrates AI collaboration and autonomous task-handling to create a seamless development experience, enhancing productivity through its innovative Cascade feature.

insight • 9 hours ago • Via Last Week in AI • www.maginative.com

Mistral Competition. French startup Mistral has launched Pixtral Large, a 124-billion-parameter model, and upgraded its chatbot, Le Chat, to compete directly with OpenAI's ChatGPT.

data point • 9 hours ago • Via Last Week in AI • venturebeat.com

I believe that Agents will lead to the next major breakthrough in AI.

insight • 1 day ago • Via Artificial Intelligence Made Simple •

Gemini is tediously overengineered: it's trying to balance MoE with mid- AND post-generation alignment.

insight • 1 day ago • Via Artificial Intelligence Made Simple •

If your primary purpose with the LLM is to have it be the engine that ensures that everything works, then 4o seems like the best choice.

insight • 1 day ago • Via Artificial Intelligence Made Simple •

I think they're a very sketchy company, and they consistently hide important information.

insight • 1 day ago • Via Artificial Intelligence Made Simple •

I'd strongly recommend doing your own research.

recommendation • 1 day ago • Via Artificial Intelligence Made Simple •

Precision improves the reproducibility of your experiments, which makes your system more predictable.

insight • 1 day ago • Via Artificial Intelligence Made Simple •

I've been bullish on Gemini/Google AI for a while now, but they have found new ways to constantly let me down.

insight • 1 day ago • Via Artificial Intelligence Made Simple •

As an orchestrator, I have nothing positive to say about o1.

data point • 1 day ago • Via Artificial Intelligence Made Simple •

Claude is very good at decompositions but lacks stability and has an annoying tendency not to follow my instructions.

data point • 1 day ago • Via Artificial Intelligence Made Simple •

GPT-4o is by far my favorite LLM for the orchestration layer.

data point • 1 day ago • Via Artificial Intelligence Made Simple •

The 'LLM as an orchestrator' is my favorite framework/thinking pattern in building Agentic Systems.

insight • 1 day ago • Via Artificial Intelligence Made Simple •

AI Poetry Study. A new AI study claims that ChatGPT can write poetry that is 'indistinguishable' from William Shakespeare.

data point • 1 day ago • Via Gary Marcus on AI •

Skeptical View. Gary Marcus hopes he has taught you by now to never trust the hype.

insight • 1 day ago • Via Gary Marcus on AI •

Critique Link. Davis's full critique can be found here.

data point • 1 day ago • Via Gary Marcus on AI • cs.nyu.edu

Appendix Highlight. Stay for the Appendix, entitled 'Particularly terrible lines'.

insight • 1 day ago • Via Gary Marcus on AI •

AI Imitation. The AI poems seem like imitations that might have been produced by a supremely untalented poet who had never read any of the poems he was tasked with imitating.

insight • 1 day ago • Via Gary Marcus on AI •

Davis's Critique. Ernest Davis took a careful look at the study's methods and materials, not just the headline.

insight • 1 day ago • Via Gary Marcus on AI • cs.nyu.edu

Saudi AI Initiative. Saudi Arabia plans a $100 billion AI initiative aiming to rival UAE's tech hub, highlighting the region's escalating AI investments.

insight • 2 days ago • Via Last Week in AI • www.bloomberg.com

Anthropic Collaboration. Anthropic collaborates with Palantir and AWS to integrate Claude into defense environments, marking a significant policy shift for the company.

insight • 2 days ago • Via Last Week in AI • techcrunch.com

US Sanctions Challenge. U.S. penalties on GlobalFoundries for violating sanctions against SMIC underline ongoing challenges in enforcing AI-chip export controls.

insight • 2 days ago • Via Last Week in AI • www.reuters.com

OpenAI's Acquisition. OpenAI's acquisition of chat.com and internal shifts signal significant strategy pivots and challenges with model scaling and security.

insight • 2 days ago • Via Last Week in AI • techcrunch.com

Systemic Issues. The scale-first mentality is a systemic issue and needs to be addressed at the root.

insight • 3 days ago • Via Artificial Intelligence Made Simple •

Research Culture Influence. Scaling reliably improves benchmarks, which makes it a very good match for an academic/research environment where publications are a must.

insight • 3 days ago • Via Artificial Intelligence Made Simple •

Performance Metrics Issue. Emergent abilities are created by the researcher’s choice of metrics, not fundamental changes in model family behavior on specific tasks with scale.

insight • 3 days ago • Via Artificial Intelligence Made Simple •
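
The metrics point above can be illustrated with a toy calculation (my illustration, with hypothetical numbers, not from the newsletter): if per-token accuracy improves smoothly with scale, an all-or-nothing metric such as exact match on a 30-token answer still looks like a sudden "emergent" jump.

```python
# Toy illustration (hypothetical numbers): a smooth per-token accuracy
# curve produces a sharp-looking curve under an all-or-nothing metric.
def exact_match_rate(per_token_acc: float, answer_len: int) -> float:
    """Exact match requires every token to be correct: p ** L."""
    return per_token_acc ** answer_len

for p in (0.80, 0.90, 0.95, 0.99):
    # Exact match stays near zero until per-token accuracy is very high.
    print(f"per-token acc {p:.2f} -> exact-match {exact_match_rate(p, 30):.3f}")
```

Under the smooth per-token metric the same models improve gradually; the apparent discontinuity comes from the choice of metric, not from the model family.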

Funding Justifications. Big scaling projects are easy to explain to funders.

insight • 3 days ago • Via Artificial Intelligence Made Simple •

Market Share Strategy. The hope is that scaling now will build market share for the future (and prevent competitors from taking it in the future).

insight • 3 days ago • Via Artificial Intelligence Made Simple •

Corporate Research Benefits. Scaling is a very attractive option for corporate research because it is everything that middle management dreams about: reliable, easy to account for, non-disruptive, and impersonal.

insight • 3 days ago • Via Artificial Intelligence Made Simple •

Public Interest Concerns. The accompanying research dominance should be a worry for policy-makers around the world because it means that public interest alternatives for important AI tools may become increasingly scarce.

insight • 3 days ago • Via Artificial Intelligence Made Simple •

Scaling Dominance. Scaling wins because it perfectly fits how big organizations (especially corporate research) operate.

insight • 3 days ago • Via Artificial Intelligence Made Simple •

Research Incentives. The incentive/compensation structure for Researchers drives them to pursue scaling.

insight • 3 days ago • Via Artificial Intelligence Made Simple •

Scaling Challenges. Leading LLM labs are reportedly struggling to move forward, with reports that both OpenAI and Google are having difficulty pushing their GPT and Gemini models to the next level.

insight • 3 days ago • Via Artificial Intelligence Made Simple •

VC Diversity Returns. VC firms that increased their hiring of women partners by just 10% saw an average increase of 1.5% in overall fund returns and gained 9.7% more profitable assets.

insight • 6 days ago • Via Artificial Intelligence Made Simple • www.bloomberg.com

Women in VC. Only 15% of private equity institutional partners and managing directors are women.

data point • 6 days ago • Via Artificial Intelligence Made Simple • strategex.com

Memory Personalization. Memory/personalization helps LLMs generate more customized output for each user, increasing the friction of switching to a competitor.

insight • 6 days ago • Via Artificial Intelligence Made Simple •

Female Entrepreneur Efficacy. Female entrepreneurs have been shown to deliver more than double the revenue per dollar invested compared to their male counterparts.

insight • 6 days ago • Via Artificial Intelligence Made Simple • www.bloomberg.com

VC Funding Skew. 98% of all venture capital dollars flow into male-founded startups.

data point • 6 days ago • Via Artificial Intelligence Made Simple • www.weforum.org

AI Ethics Discussions. Devansh talks about his experiences advocating for safer social platforms, his controversial takes on ‘morally aligned’ LLMs, and the underlying ethical issues in tech that often go unnoticed.

insight • 6 days ago • Via Artificial Intelligence Made Simple •

High-Quality Education. We follow a “pay what you can” model, which allows you to support within your means, and support my mission of providing high-quality technical education to everyone for less than the price of a cup of coffee.

recommendation • 6 days ago • Via Artificial Intelligence Made Simple •

Reading Recommendations. I figured I’d start sharing whatever AI Papers/Publications, interesting books, videos, etc I came across each week.

insight • 6 days ago • Via Artificial Intelligence Made Simple •

Community Engagement. Before we begin, our cult has established itself in 190 countries.

data point • 6 days ago • Via Artificial Intelligence Made Simple •

Research Support. I put a lot of effort into creating work that is informative, useful, and independent from undue influence.

data point • 6 days ago • Via Artificial Intelligence Made Simple •

Mixture-of-Transformers. To address the scaling challenges, the authors introduce Mixture-of-Transformers (MoT), a sparse multi-modal transformer architecture that significantly reduces pretraining computational costs.

data point • 6 days ago • Via Artificial Intelligence Made Simple • arxiv.org

Podcast Insight. We talked about a bunch of things, mainly ethics, morally aligned LLMs, and what needs to be done to ensure that tech works for us.

insight • 6 days ago • Via Artificial Intelligence Made Simple •

AlphaFold 3 Capabilities. AlphaFold 3 is a major upgrade from its predecessor, capable of modeling complex interactions between proteins, DNA, RNA, and small molecules, which are crucial for understanding drug discovery and disease treatment.

insight • 1 week ago • Via Last Week in AI •

Judge Dismisses Lawsuit. A US judge dismissed a copyright lawsuit against OpenAI, ruling that the plaintiffs failed to demonstrate that their articles were copyrighted or that ChatGPT's responses would likely plagiarize their content.

insight • 1 week ago • Via Last Week in AI • www.theregister.com

FrontierMath Benchmark. FrontierMath is a new benchmark designed to evaluate AI's mathematical reasoning by presenting research-level problems that current models struggle to solve, highlighting the gap between AI and human mathematicians.

insight • 1 week ago • Via Last Week in AI • www.marktechpost.com

OpenAI's Leadership Changes. Lilian Weng's departure from OpenAI highlights ongoing concerns about the company's commitment to AI safety amid a wave of exits by key researchers and executives.

insight • 1 week ago • Via Last Week in AI • techcrunch.com

Nvidia's Market Position. Nvidia's rise to become the world's largest company highlights the significant impact and dominance of artificial intelligence in the financial markets.

insight • 1 week ago • Via Last Week in AI • www.bloomberg.com

Waymo Expansion News. Waymo's robotaxi service, now available in Los Angeles, has rapidly expanded due to significant funding and partnerships, offering over 150,000 weekly rides across multiple cities.

insight • 1 week ago • Via Last Week in AI • www.cnbc.com

Trump's AI Policy Shift. Donald Trump's victory in the 2024 election has significant implications for the future of artificial intelligence (AI) in the United States.

insight • 1 week ago • Via Last Week in AI • time.com

AI Improvement Slowdown. OpenAI's upcoming model, code-named Orion, may not represent a significant advancement over its predecessors, as per a report in The Information.

insight • 1 week ago • Via Last Week in AI • techcrunch.com

AlphaFold 3 Open-Sourcing. Google DeepMind has released the source code and model weights of AlphaFold 3 for academic use, a move that could significantly speed up scientific discovery and drug development.

insight • 1 week ago • Via Last Week in AI • venturebeat.com

OpenAI's Custom Hardware. OpenAI partners with Broadcom and AMD to develop custom AI hardware, aiming for profitability and reducing inference costs.

insight • 1 week ago • Via Last Week in AI • www.theverge.com

AI Military Use. Reports that China's military used Meta's open-source models have prompted policy adjustments; US agencies are being given access as a counterbalance.

insight • 1 week ago • Via Last Week in AI • gizmodo.com

AI Regulation Alert. Anthropic warns of AI catastrophe if governments don't regulate in 18 months.

insight • 1 week ago • Via Last Week in AI • www.zdnet.com

AI Benchmark by OpenAI. OpenAI Releases SimpleQA: A New AI Benchmark that Measures the Factuality of Language Models.

insight • 1 week ago • Via Last Week in AI • www.marktechpost.com

Llama 3.2 Release. Meta Releases Quantized Llama 3.2 with 4x Inference Speed on Android Phones.

insight • 1 week ago • Via Last Week in AI • analyticsindiamag.com

Funding for xAI. Elon Musk's xAI in talks to raise funding valuing it at $40 billion, WSJ reports.

insight • 1 week ago • Via Last Week in AI • finance.yahoo.com

Meta and Reuters Deal. Meta strikes multi-year AI deal with Reuters.

insight • 1 week ago • Via Last Week in AI • www.axios.com

US AI Regulation. New U.S. regulation mandates quarterly reporting for large AI model training and computing cluster acquisitions, aiming to bolster national security.

insight • 1 week ago • Via Last Week in AI •

Robot Control Policy. Physical Intelligence unveils a generalist robot control policy with a $400M funding boost, showcasing significant advancements in zero-shot task performance.

insight • 1 week ago • Via Last Week in AI •

User Understanding. Users don't (always) understand how an agent differs from LLMs/ChatGPT.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Emerging Ecosystem. We think that together they can provide a useful guide for others interested in following agents.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Continuous Innovation. The journey towards building effective and ubiquitous autonomous AI agents is still one of continuous exploration and innovation.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Improved Accuracy. We now have a scalable framework that improved upon answer accuracy significantly, from 50% perceived accuracy to up to 100% on specific high-impact use cases.

data point • 1 week ago • Via Artificial Intelligence Made Simple •

Task-Specific Agents. We are increasingly excited about task- and industry-specific agents which promise to offer tailored solutions that address specific challenges and requirements.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Agent Adoption. Because users are sometimes uncomfortable with or intimidated by an iterative way of working, they give up quickly on prompt engineering.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

AI Efficiency. More comprehensive answers could, on their own, justify building agents.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Learning Challenges. We’ve learned that building useful agents is, surprise surprise… hard.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

AI Agent Components. Broadly, agents are AI systems that can make decisions and take actions on their own, following general instructions from a user.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

AI User Base. The Prosus AI team helps solve real problems for the 2 billion users we collectively serve across companies in the Prosus Group.

data point • 1 week ago • Via Artificial Intelligence Made Simple •

Prosus Conference. The MLOps Community and Prosus will host a free virtual conference on November 13th with over 40 speakers who are actively working with AI agents in production.

insight • 1 week ago • Via Artificial Intelligence Made Simple • home.mlops.community

Learning Budget. Many companies have a learning budget that you can expense this newsletter to.

data point • 1 week ago • Via Artificial Intelligence Made Simple •

Expert Insights. In the series Guests, I will invite these experts to come in and share their insights on various topics that they have studied/worked on.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Market Impact. If the rumor is verified, there could be the AI equivalent of a bank run.

insight • 1 week ago • Via Gary Marcus on AI •

AI's Critical Test. What happens if suddenly people lose faith in that hypothesis?

recommendation • 1 week ago • Via Gary Marcus on AI •

Diminishing Returns. I strongly suspect that, contra Altman, we have in fact reached a point of diminishing returns for pure scaling.

insight • 1 week ago • Via Gary Marcus on AI •

Scaling Laws Limitations. Scaling laws are not physical laws; they are merely empirical generalizations that held for a certain period of time.

insight • 1 week ago • Via Gary Marcus on AI •

Scaling Beliefs. Sam Altman was still selling scaling as if it were infinite, describing it as a 'religious level belief'.

data point • 1 week ago • Via Gary Marcus on AI • x.com

Learning Dynamics. For the MatMul-free LM, the learning dynamics differ from those of conventional models, necessitating a different learning strategy.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Ternary Weights Advantage. Using ternary weights allows for simple additions or subtractions instead of multiplications, greatly increasing computational efficiency.

insight • 1 week ago • Via Artificial Intelligence Made Simple •
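
A minimal sketch of the add/subtract trick, assuming weights constrained to {-1, 0, +1} (illustrative code, not the paper's implementation):

```python
# Hypothetical sketch: with ternary weights in {-1, 0, +1}, a
# matrix-vector product needs only additions and subtractions --
# no multiplications at all.
def ternary_matvec(W, x):
    """W: rows of ternary weights; x: input vector."""
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi       # +1 weight: add the input
            elif w == -1:
                acc -= xi       # -1 weight: subtract the input
            # 0 weight: skip entirely (sparsity comes for free)
        out.append(acc)
    return out

W = [[1, -1, 0], [0, 1, 1]]
x = [2.0, 3.0, 5.0]
print(ternary_matvec(W, x))  # [-1.0, 8.0]
```

In real MatMul-free models the hardware-level win comes from replacing multiply-accumulate units with cheaper add/subtract logic; this loop just makes the arithmetic substitution explicit.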

MatMul-free LLMs. MatMul-free models achieve performance on-par with state-of-the-art Transformers and significantly reduce memory usage.

data point • 1 week ago • Via Artificial Intelligence Made Simple • artificialintelligencemadesimple.substack.com

LLM Functionality. Our LLM functioned as the controller, which took a user query and then called the relevant script for a particular functionality.

insight • 1 week ago • Via Artificial Intelligence Made Simple •
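
The controller pattern described above can be sketched as follows. The routing step is stubbed with keyword matching, and all tool names are hypothetical; in the real system the LLM itself would pick the relevant script:

```python
# Minimal sketch of the "LLM as controller" pattern (all names are
# hypothetical; the LLM routing step is stubbed with keyword matching).
def get_weather(query: str) -> str:
    return "sunny"  # placeholder for a real weather API call

def search_docs(query: str) -> str:
    return "top match: scaling laws"  # placeholder for retrieval

TOOLS = {"weather": get_weather, "docs": search_docs}

def controller(query: str) -> str:
    """Take a user query and call the relevant script for it."""
    for name, fn in TOOLS.items():
        if name in query.lower():
            return fn(query)
    return "no matching tool"

print(controller("what's the weather in Paris?"))  # sunny
```

The design point is the separation of concerns: the controller decides *which* functionality to invoke, while each script stays a plain, testable function.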

Neuro-Symbolic AI. AlphaGeometry combines an LLM and a symbolic engine in a 2-routine loop, significantly improving efficiency in solving geometry problems.

data point • 1 week ago • Via Artificial Intelligence Made Simple • artificialintelligencemadesimple.substack.com
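
A hedged sketch of such a two-routine loop, with a toy rule set and a stubbed proposer standing in for the LLM (all names and rules here are illustrative assumptions, not AlphaGeometry's actual code):

```python
# Two-routine neuro-symbolic loop, in the spirit described above:
# a symbolic engine deduces until stuck, then a proposer (an LLM in
# the real system, a stub here) adds an auxiliary construction.
def symbolic_closure(facts, rules):
    """Apply deduction rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def propose(facts):
    # Stand-in for the LLM: suggest one auxiliary construction.
    return "aux_point" if "aux_point" not in facts else None

def solve(start, goal, rules, max_rounds=3):
    facts = set(start)
    for _ in range(max_rounds):
        facts = symbolic_closure(facts, rules)  # routine 1: deduce
        if goal in facts:
            return True
        suggestion = propose(facts)             # routine 2: propose
        if suggestion is None:
            return False
        facts.add(suggestion)
    return goal in facts

RULES = [({"a"}, "b"), ({"b", "aux_point"}, "goal")]
print(solve({"a"}, "goal", RULES))  # True
```

The efficiency gain comes from letting the cheap, exhaustive symbolic routine do most of the work and calling the expensive generative routine only when deduction stalls.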

Integration of Techniques. Constantly emphasizing a better tomorrow won't make people stop using these models today; it will stop them from thinking deeply about how to address the fundamental issues today.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Misleading Claims. To turn a 'there are limits to blindly scaling LLMs' to 'LLMs as a whole are hitting diminishing returns' is a huge stretch.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Hype in AI. There's a lot of hype that needs to be addressed, and we will create dangerous products if we keep pushing hype.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Limitations of Deep Learning. Due to these limitations, it's best to pair deep learning with other techniques, especially when control/transparency are at stake.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Economics of LLMs. I think LLMs would still be economically viable in very high-cost avenues, where productivity gains can justify higher costs.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Diverse Research Directions. We really need to look beyond LLMs to explore (and celebrate) more research directions.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Agreement with Marcus. Right off the bat, I agree with the following claim: scale won't solve general intelligence.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Critique of Marcus. My goal in this article is to explain why I disagree heavily with Gary Marcus's claim that Deep Learning or LLMs are close to hitting a wall.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

AI Skeptics React. This has gotten a lot of AI Skeptics rejoicing since they can wave around their 'I told you so's.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Gary Marcus' Claim. Gary Marcus released an article arguing that LLMs have indeed reached a point of diminishing returns.

data point • 1 week ago • Via Artificial Intelligence Made Simple • garymarcus.substack.com

Market Recognition. I’m glad that the market is finally recognizing that what I’ve been saying is true.

insight • 1 week ago • Via Gary Marcus on AI •

Truth Revealed. The thing is, in the long term, science isn’t majority rule. In the end, the truth generally outs.

insight • 1 week ago • Via Gary Marcus on AI •

Investment Misalignment. Meanwhile, precious little investment has been made in other approaches. If LLMs won’t get the US to trustworthy AI, and our adversaries invest in alternative approaches, we could easily be outfoxed.

recommendation • 1 week ago • Via Gary Marcus on AI •

Economic Concerns. The economics are likely to be grim. Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence.

insight • 1 week ago • Via Gary Marcus on AI •

Deep Learning Critique. In my most notorious article, in March of 2022, I argued that 'deep learning is hitting a wall'.

data point • 1 week ago • Via Gary Marcus on AI • nautil.us

Scaling Limits. For years I have been warning that 'scaling' — eking out improvements in AI by adding more data and more compute, without making fundamental architectural changes — would not continue forever.

insight • 1 week ago • Via Gary Marcus on AI •

Consulting Services. I provide various consulting and advisory services.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Email Template Link. You can use the following for an email template to request reimbursement for your subscription.

data point • 1 week ago • Via Artificial Intelligence Made Simple • docs.google.com

Discard Boring Datasets. Scrap the boring Iris datasets, the GPT + Vector DB spin-offs, and the Wine Price predictions.

recommendation • 1 week ago • Via Artificial Intelligence Made Simple •

High ROI Projects. I believe that they are one of the highest-ROI investments for an early-career person.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Quality Over Quantity. Not 5 or 10 mid-tier projects, but 1-3 very good ones.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Essential Side Projects. The most crucial step to getting your first ML job is to have an amazing side project.

recommendation • 1 week ago • Via Artificial Intelligence Made Simple •

Target Audience. In this article, I will focus my advice on the group I am most qualified to speak to: early-career students looking for their first role in Machine Learning.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Research Role Limitation. This leaves me ineligible for research roles at most companies (which require either an MS or, preferably, a PhD).

data point • 1 week ago • Via Artificial Intelligence Made Simple •

Formal Education Impact. This means that I don’t ever see myself pursuing an upper-level degree.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Machine Learning Jobs. A lot of people reach out to me with this question. Answering this question is complex and relies heavily on the person’s individual goals, interests, and skills.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Synthetic Data Utilization. Synthetic data can be a way to create fake training data that 'feels like' real samples, enhancing model performance.

recommendation • 2 weeks ago • Via Artificial Intelligence Made Simple •
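
One simple way to generate data that "feels like" real samples is to jitter existing ones. A minimal sketch, with hypothetical parameters (my illustration, not the newsletter's method):

```python
import random

# Toy synthetic-data sketch: create new samples by adding small Gaussian
# noise to randomly chosen real ones, e.g. to pad out a minority class.
def synthesize(samples, n_new, noise=0.05, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = []
    for _ in range(n_new):
        base = rng.choice(samples)
        out.append([x + rng.gauss(0, noise) for x in base])
    return out

real = [[1.0, 2.0], [1.1, 1.9]]
fake = synthesize(real, n_new=4)
print(len(fake))  # 4
```

Real pipelines use far more sophisticated generators (simulators, diffusion models, LLMs), but the principle is the same: the synthetic points stay close to the real data distribution.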

Amazon's Fairness Metrics. Amazon's publication showed that the usual metrics used to measure fairness reflect the biases of their datasets.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple • www.amazon.science

Addressing Dataset Biases. Injecting diversity into your training data can save your performance and enhance representations.

recommendation • 2 weeks ago • Via Artificial Intelligence Made Simple •

Audit Recommendations. Engaging independent experts to verify transparency mechanisms and documentation ensures accountability.

recommendation • 2 weeks ago • Via Artificial Intelligence Made Simple •

Open Source Benefits. Open-source can reduce your R&D costs, help you identify and solve major issues, and bring more people into your ecosystem.

recommendation • 2 weeks ago • Via Artificial Intelligence Made Simple •

Embedding Models Access. Giving access to embedding models will significantly improve the transparency and development of LLM-based solutions.

recommendation • 2 weeks ago • Via Artificial Intelligence Made Simple •

Self-Generated Data Issues. Models fed on self-generated training data tend to deteriorate over time.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Compounding Bias Origins. Unchecked dataset biases also feed into the way these models learn, compounding the bias.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Bias Control Necessity. We don’t want bias-free AI. We want an AI with biases that are explicit, controllable, and agreeable.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Transparency Importance. Transparency is the most important aspect of LLMs that we should be building on now.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Gemini's Hateful Flagging. Gemini flagged an image of a Middle Eastern camel salesman as 'unsafe', a problematic sign of AI bias.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Media's Role. The media is failing. The individual incidents have all been reported, but seemingly nobody is putting it all together.

insight • 2 weeks ago • Via Gary Marcus on AI •

Growing Concerns. Disorientation, dysfluency, disinhibition, and challenges with motor control - we are seeing it over and over, and it’s obviously getting worse.

insight • 2 weeks ago • Via Gary Marcus on AI •

Unusual Behavior. Most incredibly, later yesterday, in front of a live national television audience, Trump performed simulated fellatio on a microphone stand.

data point • 2 weeks ago • Via Gary Marcus on AI • x.com

Disinhibition Signs. The chilling thing is that in the twelve hours after I posted that, we saw at least four MORE incidents, including more signs of foul language and disinhibition.

insight • 2 weeks ago • Via Gary Marcus on AI •

Trump's Warnings. Well over half a million people have viewed it on X: one of the bluntest warnings I have ever written.

data point • 2 weeks ago • Via Gary Marcus on AI •

Psychology Background. Before I turned full time to AI, the usual topic of this newsletter, I spent decades focused on human psychology, mostly as a full professor at NYU.

data point • 2 weeks ago • Via Gary Marcus on AI •

Dementia Behavior. As Meiselas put it on X, discussing the microphone incident: yes, the fellatio incident fits with dementia, since people with dementia can experience changes in sexual behavior, including confusion about sex and inappropriate behavior.

insight • 2 weeks ago • Via Gary Marcus on AI •

Urgency of Coverage. With the election being on Tuesday, this is the most urgent post I have ever written. If you are in the mainstream media, or know someone who is, for the love of democracy, please cover Trump’s apparent dementia.

recommendation • 2 weeks ago • Via Gary Marcus on AI •

Call to Action. Dear Media, Don’t be cowed by the Goldwater rule: Call Dr. Lance Dodes. Reach out to the group Duty2Warn.

recommendation • 2 weeks ago • Via Gary Marcus on AI • x.com

ChatGPT Integration. The beta release includes additional features such as Genmoji, Image Playground, Visual Intelligence, Image Wand, and ChatGPT integration.

insight • 2 weeks ago • Via Last Week in AI •

AI-Powered Transcription Tool Concerns. AI-powered transcription tool Whisper invents things no one ever said, leading to fabrications in transcriptions used in various industries, including medical settings.

insight • 2 weeks ago • Via Last Week in AI • abcnews.go.com

OpenAI's Chip Development. OpenAI is collaborating with Broadcom to develop custom silicon for AI workloads, while also incorporating AMD chips into its Microsoft Azure setup.

insight • 2 weeks ago • Via Last Week in AI • www.theverge.com

Google's AI Watermarking Tool. Google has developed an AI watermarking tool to identify AI-generated text, making it easier to distinguish between AI-generated and human-written content.

data point • 2 weeks ago • Via Last Week in AI • www.newscientist.com
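
The linked article doesn't detail Google's mechanism. As an illustrative sketch only (not Google's actual scheme), a widely known approach to text watermarking biases generation toward a pseudo-random "green list" of tokens seeded by the previous token, which a detector can recompute later without access to the model:

```python
import hashlib
import random

# Toy vocabulary standing in for a real tokenizer's vocab.
VOCAB = [f"tok{i}" for i in range(100)]

def green_list(prev_token, fraction=0.5):
    # Seed a PRNG with the previous token so a detector can
    # recompute the same partition without access to the model.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * fraction)])

def generate_watermarked(length, start="tok0", seed=42):
    # Stand-in for an LLM that is biased (here: forced) to
    # sample from the green list at every step.
    out, prev = [], start
    rng = random.Random(seed)
    for _ in range(length):
        tok = rng.choice(sorted(green_list(prev)))
        out.append(tok)
        prev = tok
    return out

def green_fraction(tokens, start="tok0"):
    # Detector: watermarked text shows a green-token rate far
    # above the ~50% expected from unwatermarked text.
    prev, hits = start, 0
    for tok in tokens:
        hits += tok in green_list(prev)
        prev = tok
    return hits / len(tokens)
```

A real deployment softly biases sampling probabilities rather than forcing green tokens, trading detectability against text quality.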

Meta AI's Math Breakthrough. AI developed by Meta can solve century-old math problems involving Lyapunov functions that had previously gone unsolved.

data point • 2 weeks ago • Via Last Week in AI • www.newscientist.com

Waymo Funding Milestone. Waymo secures $5.6 billion in funding to expand its self-driving taxi program to more US cities, with plans to partner with Uber and a focus on safety and responsible execution.

data point • 2 weeks ago • Via Last Week in AI • techxplore.com

AI Investment Boom. The AI investment boom has led to a rapid increase in US fixed investment to meet the growth in computing demand, with companies investing in high-end computers, data center facilities, power plants, and more.

insight • 2 weeks ago • Via Last Week in AI • www.apricitas.io

Claude 3.5 Advancements. The article announces the introduction of an upgraded AI model, Claude 3.5 Sonnet, and a new model, Claude 3.5 Haiku, both of which show significant improvements in coding tasks.

data point • 2 weeks ago • Via Last Week in AI •

GitHub Copilot Expansion. GitHub is expanding its Copilot code completion and programming tool to include models from Anthropic, Google, and OpenAI, allowing developers to choose the model that best suits their needs.

data point • 2 weeks ago • Via Last Week in AI • www.theverge.com

Apple Intelligence Features. Apple has released the latest developer beta versions of its operating systems, including iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2, introducing new Apple Intelligence features.

data point • 2 weeks ago • Via Last Week in AI • techcrunch.com

Product Adoption. 79% of survey respondents said they had tried Microsoft Copilot. That’s tremendous, given how new the product is.

data point • 2 weeks ago • Via Gary Marcus on AI •

Further Reading. I read a pair of stunning statistics from a new CNBC poll.

recommendation • 2 weeks ago • Via Gary Marcus on AI • www.cnbc.com

Market Sentiment. People aren’t ignoring GenAI; they are waiting to see if it will work.

insight • 2 weeks ago • Via Gary Marcus on AI •

Value Perception. Only 25% of the respondents thought it was worth it.

data point • 2 weeks ago • Via Gary Marcus on AI •

Influencer Reaction. AI ignored? I dashed off a quick reply: practically everyone has tried it, but they are not always satisfied with the results.

insight • 2 weeks ago • Via Gary Marcus on AI •

Camus on Absurdity. The absurd is born of this confrontation between the human need and the unreasonable silence of the world.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Struggle Justification. He justifies the struggle because it preserves and enhances ordinary human moments, and that is worth it.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Learning Budget Utilization. Many companies have a learning budget, and you can expense your subscription through that budget.

data point • 2 weeks ago • Via Artificial Intelligence Made Simple • docs.google.com

Absurd Hero Definition. An Absurd Hero is an active participant in the world who chooses their own values without needing validation from any other source.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Moment and Awareness. Real generosity towards the future lies in giving all to the present.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Gambling and Knowledge Limits. Gambling has several personal, economic, and societal benefits and is a great way to teach people to appreciate the limitations of our knowledge.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Recommendation for Living. Once we give up our desire to find these truths, we can focus on the simpler things that we can comprehend and control.

recommendation • 2 weeks ago • Via Artificial Intelligence Made Simple •

Paths Out of Absurdity. Ultimately, there are only 3 ways out of this state: Suicide, Philosophical Suicide, and Embracing the Absurd.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Value of Camus' Philosophy. I think Camus is generally a great antidote to the hopelessness we feel when asked to confront an unending task.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

OpenAI Media Generation. OpenAI researchers develop new model that speeds up media generation by 50X.

data point • 3 weeks ago • Via Last Week in AI • venturebeat.com

ByteDance AI GPUs. TikTok owner ByteDance taps TSMC to make its own AI GPUs to stop relying on Nvidia.

data point • 3 weeks ago • Via Last Week in AI • www.tomhardware.com

Responsible Scaling Policy. Announcing our updated Responsible Scaling Policy.

data point • 3 weeks ago • Via Last Week in AI • www.anthropic.com

New AI Research Artifacts. Meta FAIR Releases Eight New AI Research Artifacts—Models, Datasets, and Tools to Inspire the AI Community.

data point • 3 weeks ago • Via Last Week in AI • www.maginative.com

xAI API Launch. Elon Musk's AI startup, xAI, launches an API.

data point • 3 weeks ago • Via Last Week in AI • techcrunch.com

NVIDIA Server Deployment. NVIDIA's Blackwell GB200 AI Servers Ready For Mass Deployment In December.

data point • 3 weeks ago • Via Last Week in AI • www.tomsguide.com

Canva AI Tool. Canva has a shiny new text-to-image generator.

data point • 3 weeks ago • Via Last Week in AI • www.theverge.com

AI Video Startup Launch. AI video startup Genmo launches Mochi 1, an open source rival to Runway, Kling, and others.

data point • 3 weeks ago • Via Last Week in AI • venturebeat.com

Anthropic AI Update. Anthropic's latest AI update can use a computer on its own.

data point • 3 weeks ago • Via Last Week in AI • www.theverge.com

AI News Summary. Our 187th episode with a summary and discussion of last week's big AI news, now with Jeremie co-hosting once again!

insight • 3 weeks ago • Via Last Week in AI •

Investment Motivations. Investors are drawn to plausible stories with big numbers, which allows them to make substantial fees from investing other people's money.

insight • 3 weeks ago • Via Gary Marcus on AI •

Precision in Predictions. Any serious scientist or engineer knows you can't possibly predict the future with that kind of precision, especially when there are so many unknowns.

insight • 3 weeks ago • Via Gary Marcus on AI •

Criticism of Hype. The audience of investors was potentially misled into believing that AI advancements are more controlled and predictable than they are.

insight • 3 weeks ago • Via Gary Marcus on AI •

Questionable Intelligence Claims. Masayoshi Son stated that 'Artificial Super Intelligence' would be 10,000 times smarter than humans, predicting it would arrive in 2035.

data point • 3 weeks ago • Via Gary Marcus on AI • x.com

AI Improvement Claims. Elon Musk stated, 'I feel comfortable saying that AI is getting 10 times better each year' without specifying any measure.

data point • 3 weeks ago • Via Gary Marcus on AI • x.com

Futuristic Robot Predictions. Elon Musk said, 'I think by 2040 probably there are more humanoid robots than there are people.'

data point • 3 weeks ago • Via Gary Marcus on AI • qz.com

Preempting Rounds.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

Maintain Relationships.

recommendation • 4 weeks ago • Via Artificial Intelligence Made Simple •

Fundraising Timeline.

data point • 4 weeks ago • Via Artificial Intelligence Made Simple •

Investor Connections.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

Investment Criteria.

data point • 4 weeks ago • Via Artificial Intelligence Made Simple •

Identify Investors.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

Founder Attraction.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

Funding Challenges.

data point • 4 weeks ago • Via Artificial Intelligence Made Simple •

Learning Budget Support.

recommendation • 4 weeks ago • Via Artificial Intelligence Made Simple • docs.google.com

Expert Insights Series.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

Regulatory Challenges for Tesla. Tesla's plans for 'unsupervised FSD' and robotaxis could face regulatory challenges in California and Texas due to the need for permits and exemptions.

recommendation • 4 weeks ago • Via Last Week in AI • techcrunch.com

AI Child Abuse Risks. AI chatbot service Muah.AI is being used to request and potentially generate child-sexual-abuse material, highlighting the broader issue of AI's potential for abuse.

concern • 4 weeks ago • Via Last Week in AI • www.theatlantic.com

Controversial Perplexity Lawsuit. Perplexity is facing a lawsuit from Dow Jones and the New York Post for allegedly creating fake sections of news stories and falsely attributing them to publishers.

insight • 4 weeks ago • Via Last Week in AI • wired.com

AI Fraud Recovery. AI has helped the US Treasury Department recover $1 billion worth of check fraud in fiscal 2024, nearly triple the amount recovered in the prior fiscal year.

data point • 4 weeks ago • Via Last Week in AI • www.cnn.com

ChatGPT Traffic Milestone. ChatGPT's web traffic has been steadily increasing, reaching 3.1 billion visits in September 2024, marking significant growth compared to the previous year.

data point • 4 weeks ago • Via Last Week in AI • www.similarweb.com

ByteDance Sabotage Incident. ByteDance confirmed that an intern was fired in August for planting malicious code in its AI models.

data point • 4 weeks ago • Via Last Week in AI • arstechnica.com

Elon Musk's API Launch. Elon Musk's AI startup, xAI, has launched an API for its flagship generative AI model, Grok.

data point • 4 weeks ago • Via Last Week in AI • techcrunch.com

Perplexity Valuation Rise. Perplexity AI, an artificial intelligence search engine startup, is aiming to raise its valuation to approximately $9 billion in its upcoming funding round, a significant increase from its $3 billion valuation in June.

data point • 4 weeks ago • Via Last Week in AI • www.cnbc.com

Meta AI Artifacts. Meta's Fundamental AI Research (FAIR) team has unveiled eight new AI research artifacts, including models, datasets, and tools, aimed at advancing machine intelligence.

data point • 4 weeks ago • Via Last Week in AI • www.maginative.com

Adobe AI Video Tools. Adobe's AI video model is here, and it's already inside Premiere Pro.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

AI Podcast Episode. Our 186th episode with a summary and discussion of last week's big AI news!

data point • 1 month ago • Via Last Week in AI •

Google's Nuclear Project. Google will help build seven nuclear reactors to power its AI systems.

insight • 1 month ago • Via Last Week in AI • finance.yahoo.com

LLMs' Reasoning Limitations. LLMs can't perform 'genuine logical reasoning,' Apple researchers suggest.

insight • 1 month ago • Via Last Week in AI • arstechnica.com

OpenAI Content Deal. OpenAI announces content deal with Hearst, including content from Cosmopolitan, Esquire and the San Francisco Chronicle.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

YouTube AI Expansion. YouTube expands AI audio generation tool to all U.S. creators.

data point • 1 month ago • Via Last Week in AI • www.socialsamosa.com

AI Catalyst Event. Check out Jon's upcoming agent-focused event here - AI Catalyst: Agentic Artificial Intelligence.

recommendation • 1 month ago • Via Last Week in AI • www.oreilly.com

Future of AI Development. If the answer isn't bigger LLMs, we may have wasted half a decade.

insight • 1 month ago • Via Gary Marcus on AI •

Societal Risks. If things do fall apart, it is not just investors who stand to lose, but society. Immense resources may be wasted because of hype.

insight • 1 month ago • Via Gary Marcus on AI •

Reflection on AI. AI ought to be looking itself in the mirror, too, right about now.

insight • 1 month ago • Via Gary Marcus on AI •

Hype in AI. The combination of made-up graphs and outsized promises could make a person nervous.

insight • 1 month ago • Via Gary Marcus on AI •

Imaginary Data. The curve, so far as I know, is just made up. I do not know any measure that pegs the delta between GPT-4 and o1 (marked as 'today') as being triple the delta between GPT-3 and GPT-4.

insight • 1 month ago • Via Gary Marcus on AI •

Critical Graphs. As I wrote on X, the graph 'is a fantasy about the future, and not at all obvious that the 'data' plotted correspond to anything real.'

insight • 1 month ago • Via Gary Marcus on AI •

Theranos Comparison. But sometimes I have heard others compare OpenAI to Theranos, which was basically a fraud. Another charismatic founder, another ridiculous valuation, and another collapse.

insight • 1 month ago • Via Gary Marcus on AI •

Comparisons to WeWork. When I think of OpenAI, I often think of WeWork: charismatic founder, immense valuation, questionable business plan, and the possibility of similar immense deflation in their valuation, if confidence wavers.

insight • 1 month ago • Via Gary Marcus on AI •

Meta's Actions on Sextortion. Meta has acted on our recommendations to protect teenagers from sextortion.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Four-Year Journey. We’ve reached over 10 Million people overall and are only growing.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Sourcing Assistance. My work has about 150-200K views a week, many of whom are founders and prospective founders of future software/AI companies.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

VC Engagement. I’m looking to learn more about Venture Capital to help with my end goals of having my own AI Lab.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Victim Support Partnership. We’re also partnering with Crisis Text Line in the US to provide people with free, 24/7, confidential mental health support.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Screenshot Prevention. Soon, we’ll no longer allow people to use their device to directly screenshot or screen record ephemeral images or videos.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Follower List Restrictions. Removing access to follower lists is a very simple, but extremely effective way to stop scams.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Improved Scammer Detection. Meta will start using various signals to identify and flag accounts that might be blackmailers.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

LLM Usage Risks. The more people use LLMs, the more trouble we are going to be in.

insight • 1 month ago • Via Gary Marcus on AI •

Need for AI Activism. Gary Marcus is quite concerned that nobody is really talking about tech policy, when so much is at stake.

recommendation • 1 month ago • Via Gary Marcus on AI •

Ethical Guardrails Failure. The possibilities are now endless for propaganda, troll farms, and rings of fake websites that degrade trust across the internet.

data point • 1 month ago • Via Gary Marcus on AI •

Predictable Issues. Jailbreaks aren’t new, but even after years of them, the tech industry has nothing like a robust response.

insight • 1 month ago • Via Gary Marcus on AI •

Civilian Threats. If the attack were carried out in the real world, people could be socially engineered into believing the unintelligible prompt might do something useful.

insight • 1 month ago • Via Gary Marcus on AI •

Vulnerable Robotics. Companies like Google, Tesla, and Figure.AI are now stuffing jailbreak-vulnerable LLMs into robots.

insight • 1 month ago • Via Gary Marcus on AI •

Imprompter Attacks. The Imprompter attacks on LLM agents start with a natural language prompt that tells the AI to extract all personal information from the user's conversation.

insight • 1 month ago • Via Gary Marcus on AI • www.wired.com

Jailbreaking Concerns. Both concerns jailbreaking: getting LLMs to do bad things by evading often simplistic guardrails.

insight • 1 month ago • Via Gary Marcus on AI •

Sharing Personal Data. There is a temptation for some people (and some businesses) to share their very personal information with LLMs.

insight • 1 month ago • Via Gary Marcus on AI •

Neglect of Real Issues. By prioritizing moral alignment, we give LLM providers a pass from addressing the much more real concerns that plague these systems currently.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Preventing Sextortion. Instagram can mitigate a vast majority of financial sextortion cases by hiding minors' Followers and Following lists on their platform by default.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Sextortion Increase. The tenfold increase of sextortion cases in the past 18 months is a direct result of instructional videos and scripts being distributed on platforms like TikTok.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Impact of Child Labor. Child labor allows contractors to undercut their competition, leading tech companies to choose suppliers based on cheaper costs.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Child Labor in Tech. Many tech companies rely on suppliers that are known to use child labor in their supply chains.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

High Error Rate. I got close to 50% error (5 out of 11 times, it didn’t match their output) when testing OpenAI’s diagnostic claims.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

OpenAI's Medical Claims. OpenAI claims that their o1 model is really good at Medical Diagnosis, being able to diagnose diseases given a phenotype profile.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Morally Aligned AGI. Arguments for LLM moral alignment entail building a system focused on Peace and Love, with the belief that AGI could otherwise harm humans.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

LLM Safety Series. This article is the final part of our mini-series about Language Model Alignment.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Neglect for Transparency. OpenAI could have instead been much more open about their model's limitations.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Nobel Prize for AI. John J. Hopfield and Geoffrey E. Hinton have been awarded the Nobel Prize in Physics for their groundbreaking work in the development of neural networks.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

AI Safety Clock. The AI Safety Clock warns of potential doomsday scenarios and the need for global regulation and company responsibility to ensure safe AI development.

insight • 1 month ago • Via Last Week in AI • time.com

AI Chatbots Threat. AI chatbots can read and write invisible text, creating a covert channel for attackers to conceal and exfiltrate confidential data, posing a significant security threat.

insight • 1 month ago • Via Last Week in AI • arstechnica.com

AI's Potential Dangers. Amodei acknowledges the potential dangers of AI to civil society and the need for discussions about economic organization in a post-AI world.

insight • 1 month ago • Via Last Week in AI •

AI Predictions by CEO. Anthropic CEO Dario Amodei predicts that 'powerful AI', capable of outperforming Nobel Prize winners, will emerge by 2026.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

Tesla's Advanced Vehicles. At Tesla's 'We Robot' event, Elon Musk introduced futuristic vehicles, including the Cybercab and the Robovan.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

Adobe's AI Video Model. Adobe has launched its AI video model, Firefly, which includes several new tools for video generation and editing.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

AI Revolutionizes Protein Understanding. Hassabis and Jumper utilized AI to predict the structure of millions of proteins, while Baker employed computer software to invent a new protein.

insight • 1 month ago • Via Last Week in AI •

Transformative Impact. The Nobel committee highlighted the transformative impact of Hopfield and Hinton's work, stating that their machine learning breakthroughs have provided a new way to use computers to address societal challenges.

insight • 1 month ago • Via Last Week in AI •

AI Investment Surge. Goldman Sachs estimates companies will spend $1 trillion to use AI chatbots in their operations.

data point • 1 month ago • Via Last Week in AI •

Nobel Prize in Chemistry. The Nobel Prize in Chemistry has been awarded to three scientists for their groundbreaking work in predicting and creating proteins using advanced technology.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

Versatile Applications of BEAST. BEAST can be used for various adversarial tasks, including jailbreaking, hallucination induction, and privacy attacks.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Success Rates Comparison. ACG achieves up to 84% success rates at attacking GPT-3.5 and GPT-4, significantly outperforming GCG.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Gradient-Free Optimization. BEAST does not rely on gradients, allowing it to be faster than traditional optimization-based attacks.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Cost-Effective Attacks. Chaining MCTS and evolutionary algorithms (EA) together to cut down on costs can be a strategic approach to maximizing attack efficiency.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Beam Search-Based Method. BEAST uses beam search to quickly explore adversarially generated prompts, maintaining a balance between speed and effectiveness.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
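
A minimal sketch of the beam-search idea behind this style of attack. The scorer here is a toy stand-in (prefix overlap with a hypothetical target sequence) for the victim model's loss that BEAST actually optimizes; this is not the BEAST implementation:

```python
# Toy objective standing in for the target model's loss: reward
# prefix overlap with a "target" token sequence. In BEAST proper,
# candidate suffixes are scored by the victim LLM itself.
TARGET = ["ignore", "previous", "instructions"]
TOKENS = ["ignore", "previous", "instructions", "please", "the", "now"]

def score(suffix):
    return sum(1 for i, t in enumerate(suffix)
               if i < len(TARGET) and t == TARGET[i])

def beam_search(beam_width=3, steps=3):
    beam = [[]]  # start from an empty adversarial suffix
    for _ in range(steps):
        # Expand every beam entry by every candidate token,
        # then keep only the top-scoring beam_width candidates.
        candidates = [s + [t] for s in beam for t in TOKENS]
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]
    return beam[0]
```

The beam width is the speed/effectiveness dial: a width of 1 degenerates to greedy search, while larger widths explore more candidates per step at proportionally higher cost.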

ACG Attack Advantage. ACG maintains a buffer of recent successful attacks, helping guide the search process and reduce noise.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Bijection Learning Utility. Bijection learning is interesting because it generalizes encoding-based jailbreaks using arbitrary mappings that are learned in-context by the target model.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
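
As a toy illustration of an encoding-based mapping (a fixed random letter permutation, not the in-context learned bijections from the actual technique), an attacker-side encoder/decoder might look like:

```python
import random
import string

def make_bijection(seed=0):
    # A random permutation of the lowercase alphabet; all other
    # characters map to themselves.
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    random.Random(seed).shuffle(shuffled)
    return dict(zip(letters, shuffled))

def encode(text, mapping):
    return "".join(mapping.get(c, c) for c in text)

def decode(text, mapping):
    inverse = {v: k for k, v in mapping.items()}
    return "".join(inverse.get(c, c) for c in text)
```

The attack's generality comes from the fact that the mapping is arbitrary: the target model is taught the bijection in-context, so a filter trained on fixed encodings (ROT13, base64) never sees a familiar pattern.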

MCTS Effectiveness. MCTS’s tree-based approach handles the large branching factor inherent in language generation tasks, allowing for a balance between exploitation and exploration.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
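
The exploitation/exploration balance in MCTS is typically implemented with the UCB1 selection rule; a minimal sketch of that rule (an illustration, not Haize Labs' code):

```python
import math

def ucb1(value_sum, visits, parent_visits, c=1.4):
    # Upper Confidence Bound for MCTS node selection: the first term
    # exploits high-scoring branches, the second explores rarely
    # visited ones; c trades the two off.
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return value_sum / visits + c * math.sqrt(
        math.log(parent_visits) / visits
    )

def select_child(children, parent_visits):
    # children: list of (value_sum, visit_count) pairs
    scores = [ucb1(v, n, parent_visits) for v, n in children]
    return scores.index(max(scores))
```

In a multiturn jailbreak search, each child would be a candidate next message; the large branching factor of language generation is what makes this pruning essential.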

Automated Red-Teaming Techniques. Haize Labs breaks LLMs by employing techniques like multiturn jailbreaks via Monte Carlo Tree Search (MCTS) and bijection learning.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Red-Teaming Importance. Red teaming has 2 core uses: it ensures that your model is 'morally aligned' and helps you spot weird vulnerabilities and edge cases that need to be patched/improved.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Cerebras IPO Filing. Cerebras, an A.I. Chipmaker Trying to Take On Nvidia, Files for an I.P.O.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

Waymo-Hyundai Partnership. Waymo to add Hyundai EVs to robotaxi fleet under new multiyear deal.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

Google AI Ads Expansion. Google brings ads to AI Overviews as it expands AI's role in search.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

OpenAI's VC Round. OpenAI closes the largest VC round of all time.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

California AI Policy Debate. AI policy discussions intensify as California's vetoed bill sparks debates on regulation, alongside Google's $1 billion investment to expand AI infrastructure in Thailand.

insight • 1 month ago • Via Last Week in AI •

Microsoft-OpenAI Moves. Microsoft and OpenAI's strategic advancements highlight significant financial moves and AI enhancements, including Microsoft's enhanced Copilot.

insight • 1 month ago • Via Last Week in AI •

Mio's Foundation Model. Mio's foundation model and Apple's Depth Pro enhance multimodal AI inputs and precise 3D imaging for AR, VR, and robotics.

insight • 1 month ago • Via Last Week in AI •

Meta's MovieGen Features. Meta's MovieGen introduces innovative features in AI video generation, alongside OpenAI's real-time speech API and expanded ChatGPT capabilities.

insight • 1 month ago • Via Last Week in AI •

AI Story Creation for Kids. AI reading coach startup Ello now lets kids create their own stories.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Reading Recommendations. A lot of people reach out to me for reading recommendations, so I will start sharing interesting AI Papers/Publications, books, videos, etc.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Productivity Insights from Copilot. Less experienced developers accept AI-generated suggestions more frequently than their more experienced counterparts.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Weight Loss Industry Concerns. The episode dives into the dirty business of Big Pharma and weight loss drugs, spotlighting historical distrust.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Mathematical Reasoning Limitations. The performance of all models declines significantly as the number of clauses in a question increases, highlighting a fragility in mathematical reasoning.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI and Conspiracy Beliefs. AI is surprisingly effective in countering conspiracy beliefs, even against true believers.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Energy-efficient Models. The new L-Mul algorithm could reduce energy costs by up to 95% when applied in tensor processing hardware.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
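
L-Mul itself targets low-precision tensor hardware and is not reproduced here; as a software illustration of the underlying idea (approximating a floating-point multiply with a single integer addition), the classic Mitchell-style bit trick is related but not the paper's algorithm:

```python
import struct

def float_bits(x):
    # Raw bit pattern of a float32 (this sketch assumes x > 0).
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bits_float(b):
    return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

def approx_mul(a, b):
    # Adding the bit patterns adds the exponents exactly and the
    # mantissas approximately (treating the mantissa as a linear
    # approximation of its log). 0x3F800000 is the bit pattern of
    # 1.0, subtracted to cancel the doubled exponent bias. The
    # worst-case relative error of this trick is about 11%.
    return bits_float(float_bits(a) + float_bits(b) - 0x3F800000)
```

Because the add replaces the mantissa multiplier, the energy saving comes from hardware area and switching activity, not from fewer instructions in software.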

Legislators' Role. Legislators cannot sit idly by while AI technology is further developed and distributed to the public.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Generative AI Copyright. The article presents a convincing case for why AI companies should either compensate or seek permission from copyright holders to use their data for training purposes.

insight • 1 month ago • Via Artificial Intelligence Made Simple • www.futuristiclawyer.com

Community Spotlight. Dave Farley runs the excellent YouTube channel Continuous Delivery where he shares insights on software engineering.

insight • 1 month ago • Via Artificial Intelligence Made Simple • www.linkedin.com

AI Subreddit. We started an AI Made Simple Subreddit to keep the community engaged.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.reddit.com

Focus Areas. The focus will be on AI and Tech, but the ideas might range from business, philosophy, ethics, and much more.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Hinton's Contribution. Hinton has made major contributions, but the citation seems to indicate he won it for inventing back-propagation, which, well, he didn’t.

insight • 1 month ago • Via Gary Marcus on AI •

Critique of LLMs. The anti-neurosymbolic tradition is limited; its most visible manifestation, LLMs, has brought fame and money, but no robust solution to solving any particular problem with great reliability.

insight • 1 month ago • Via Gary Marcus on AI •

Need for New Approach. It's time for a new approach, and Hassabis sees that; his open-mindedness will serve the field well.

recommendation • 1 month ago • Via Gary Marcus on AI •

Neurosymbolic AI. It is, as far as I can tell, the first Nobel Prize for Neurosymbolic AI.

data point • 1 month ago • Via Gary Marcus on AI •

Divergent Paths. Hinton and Hassabis represent two different paths forward in AI, with Hinton favoring back-propagation and Hassabis advancing neurosymbolic AI.

insight • 1 month ago • Via Gary Marcus on AI •

AlphaFold Significance. AlphaFold is a huge contribution to both chemistry and biology and is arguably one of the two biggest contributions of AI to date.

insight • 1 month ago • Via Gary Marcus on AI •

Response to Hinton's Award. Even Steve Hanson, a long-time Hinton defender, acknowledged 'we agree on the fact that the "Scientific committee of the Nobel committee" didn't know the N[eural] N[etwork] history very well'.

insight • 1 month ago • Via Gary Marcus on AI •

Werbos's Priority. Paul Werbos developed back-propagation into its modern form in his 1974 Harvard PhD thesis.

data point • 1 month ago • Via Gary Marcus on AI • mailman.srv.cs.cmu.edu

Nobel Prize Winners. Not one but two Nobel Prizes went to AI this week.

data point • 1 month ago • Via Gary Marcus on AI •

Competition Landscape. OpenAI faces growing competition from rivals such as Google and Amazon.

insight • 1 month ago • Via Last Week in AI •

California AI Law Blocked. California's new AI law, AB 2839, has been temporarily blocked by a federal judge due to concerns about its broad and potentially unconstitutional nature.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

New Logo Controversy. OpenAI's staff is shocked and alarmed by the proposed new logo, preferring to keep the current hexagonal flower symbol.

insight • 1 month ago • Via Last Week in AI • fortune.com

Safety vs Profit. AI Safety culture confronts capitalism as leading AI labs grapple with the challenge of prioritizing safety over profit.

insight • 1 month ago • Via Last Week in AI • www.interconnects.ai

Corporate Moves. Durk Kingma, co-founder of OpenAI, has announced his move to Anthropic, expressing excitement to contribute to the development of responsible AI systems.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Generative AI Concerns. Despite its success, the NotebookLM tool is not immune from issues that affect generative AI, such as hallucinations and bias.

insight • 1 month ago • Via Last Week in AI •

Flux Model Release. Black Forest Labs has released a new, faster text-to-image model called Flux 1.1 Pro, which is six times faster than its predecessor.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

Podcast Creation. Google's study software, NotebookLM, is being utilized by users to create AI-generated podcasts, generating engaging audio in an 'upbeat, hyper-interested tone'.

insight • 1 month ago • Via Last Week in AI •

DevDay Innovations. OpenAI has announced several new tools at its 2024 DevDay, including a public beta of its 'Realtime API' for building apps with low-latency, AI-generated voice responses.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

User Growth. ChatGPT has gained 250 million weekly active users.

data point • 1 month ago • Via Last Week in AI •

Investment Details. Microsoft contributed $750 million on top of its previous $13 billion investment.

data point • 1 month ago • Via Last Week in AI •

Funding Milestone. OpenAI has raised $6.6 billion in a new funding round, led by Thrive Capital, valuing the company at $157 billion.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Focus on AI Safety. Given how important (but misunderstood) the topic is, I have decided to orient our next few pieces on safe and responsible AI.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Human-In-Loop Learning. Integrating human feedback into the LLM training process can improve the LLM's ability to align with human preferences and values.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •
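
The core of that feedback loop is a reward model trained on human preference pairs. Below is a minimal, library-free sketch of the pairwise (Bradley-Terry) loss such a reward model typically minimizes; the scores are made up for illustration, not drawn from any specific system:

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry style loss used in preference-based reward modeling:
    -log sigmoid(r_chosen - r_rejected). It is small when the reward
    model scores the human-preferred response higher than the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# The loss shrinks as the margin between the preferred and
# rejected responses grows.
assert preference_loss(2.0, 0.0) < preference_loss(0.5, 0.0)
```

Minimizing this loss over many human-labeled pairs is what nudges the model's outputs toward human preferences and values.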

Dynamic Benchmarking. Dynamic benchmarking platforms, where new test cases are continuously added and models are re-evaluated, can provide a more accurate and up-to-date assessment of LLM capabilities.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •
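
As an illustration only (the card names no concrete platform), a toy dynamic benchmark where test cases accumulate over time and every model is re-scored against the full current pool might look like this:

```python
class DynamicBenchmark:
    """Minimal sketch of a dynamic benchmark: test cases accumulate
    over time and models are re-scored against the current pool,
    so the reported numbers stay up to date."""

    def __init__(self):
        self.cases = []  # list of (prompt, expected_answer) pairs

    def add_case(self, prompt, expected):
        self.cases.append((prompt, expected))

    def evaluate(self, model):
        """model: callable prompt -> answer. Returns accuracy on the
        current pool; re-run whenever new cases are added."""
        if not self.cases:
            return 0.0
        hits = sum(model(p) == e for p, e in self.cases)
        return hits / len(self.cases)

bench = DynamicBenchmark()
bench.add_case("2+2", "4")
bench.add_case("capital of France", "Paris")
toy_model = lambda p: {"2+2": "4"}.get(p, "?")
assert bench.evaluate(toy_model) == 0.5
bench.add_case("3*3", "9")  # a new case lands later; simply re-evaluate
assert abs(bench.evaluate(toy_model) - 1 / 3) < 1e-9
```

The design point is that the score is a property of (model, current pool), not a frozen number, which is what lets the assessment track new failure modes.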

Gamified Evaluation. Making the evaluation process more engaging and interactive can motivate evaluators and improve their performance.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Bias Awareness Training. Providing evaluators with training on bias awareness can help them recognize and mitigate their own biases.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Diversity Matters. Ensuring a diverse pool of evaluators, representing different backgrounds, perspectives, and lived experiences, is crucial for mitigating bias in the evaluation process.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Responsible AI Evaluation. LLMs should be evaluated for bias, safety, truthfulness, and privacy to ensure responsible development and deployment.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Effective Test Sets. Test sets should be challenging enough to differentiate between various LLM capabilities and weaknesses.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

New Evaluation Pillars. CTH seeks to address evaluation issues through six pillars: consistency, scoring criteria, differentiation, user experience, responsible practices, and scalability.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Human Evaluation Costs. Human evaluation is expensive and time-consuming, hindering wider adoption.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Fluency Misleading. LLMs are so good at generating fluent text that we often mistake it for being factually correct or useful.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Challenges in Evaluations. Current evaluation methods often neglect cognitive biases and user experience (UX) principles, leading to unreliable and inconsistent results.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Evaluations Consequences. The authors present a new 'ConSiDERS-The-Human Framework' (CTH for conciseness) to tackle these challenges.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

ConSiDERS Framework Introduction. Amazon's 'ConSiDERS—the human-evaluation framework: Rethinking human evaluation for generative large language models' is a welcome departure, as it attempts to tackle a very real issue in the evaluation of Language Models.

insight • 1 month ago • Via Artificial Intelligence Made Simple • www.amazon.science

Speculative Promises. Almost every putative virtue of AI (aside from making a small number of people rich) has been promissory.

insight • 1 month ago • Via Gary Marcus on AI •

Divert Funding. Diverting funding from chatbot and movie synthesis machines to more focused efforts around special-purpose AI addressing climate change might make more sense.

recommendation • 1 month ago • Via Gary Marcus on AI •

Transparency Required. We can’t, or at least shouldn’t, place massive bets like these without transparency and accountability.

recommendation • 1 month ago • Via Gary Marcus on AI •

AI Misalignment. LLMs are not the AI we need to address climate change; they are the AI we would use if we wanted to risk serious harm to the climate.

insight • 1 month ago • Via Gary Marcus on AI •

Environmental Harm. The case that AI will do serious harm to the environment if we continue on the current path is actually much stronger.

insight • 1 month ago • Via Gary Marcus on AI •

Potential Conflicts. Schmidt has a large stake in the companies building AI, and it is important to take those potential conflicts of interest seriously.

insight • 1 month ago • Via Gary Marcus on AI •

Climate Goals Skepticism. Schmidt concludes 'My own opinion is that we’re not going to hit the climate goals anyway because we are not organized to do it.'

insight • 1 month ago • Via Gary Marcus on AI •

AI Energy Concerns. Eric Schmidt argued that, despite AI’s rapacious energy demands, he would rather bet on AI solving the problem than try to constrain AI.

insight • 1 month ago • Via Gary Marcus on AI • x.com

Consulting Services. I provide various consulting and advisory services.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Learning Budget. Many companies have a learning budget that you can expense this newsletter to.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Article Access. For access to this article and all future articles, get a premium subscription below.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple • artificialintelligencemadesimple.substack.com

Tutoring Experience. I know it’s helped a lot of other people ... it was very helpful to them.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Resource Recommendation. Reading cutting-edge work solving real problems is recommended, even if you understand very little of it.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Unstructured Learning. If you’re someone who absolutely requires structure ... you probably will have a hard time with this approach.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Commitment Required. This approach will require at least 4-5 hours weekly ... you will start to see improvements in the first 2 months.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Learning Approach. The standard advice for learning Machine Learning ... is a bad base to build your knowledge around.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Different Learning Paths. Different goals require different actions. Trying to follow one path will lead to inefficient results.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Dynamic Knowledge Filtering. Success in AI ... is more about having the ability to filter through constantly shifting ... knowledge sources.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI Video Tools. AI is being rapidly integrated into various sectors, with examples including ChartWatch reducing unexpected hospital deaths, Snapchat and YouTube introducing AI video generation tools, and Lionsgate partnering with Runway for AI-assisted film production.

insight • 1 month ago • Via Last Week in AI •

AI Assistant Upgrades. OpenAI, Meta, and Google are enhancing their AI assistants with advanced voice modes, while Meta released Llama 3.2, an open-source model capable of processing both images and text.

insight • 1 month ago • Via Last Week in AI •

AI in Media Production. Lionsgate Signs Deal With AI Company Runway, Hopes That AI Can Eliminate Storyboard Artists and VFX Crews.

insight • 1 month ago • Via Last Week in AI • www.cartoonbrew.com

Deepfake Legislation. Governor Newsom signs bills to combat deepfake election content.

data point • 1 month ago • Via Last Week in AI • www.gov.ca.gov

Effective Research Findings. Recent research shows chain-of-thought prompting is most effective for math and symbolic reasoning, while OpenAI's GPT-4 with vision capabilities is being integrated into Perplexity AI's search platform.

insight • 1 month ago • Via Last Week in AI • arxiv.org
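
A chain-of-thought prompt needs nothing more than a reasoning cue appended to the question. The template below is a minimal illustration of the technique; its exact wording is an assumption, not a prompt quoted from the cited research:

```python
def cot_prompt(question: str) -> str:
    """Wrap a question in a minimal chain-of-thought template.
    The trailing cue nudges the model to emit intermediate
    reasoning steps before committing to a final answer."""
    return (
        f"Q: {question}\n"
        "A: Let's think step by step."
    )

print(cot_prompt("If a train travels 60 km in 40 minutes, what is its speed in km/h?"))
```

The cited finding is that this style of prompting helps most on math and symbolic-reasoning tasks, where intermediate steps carry real information.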

AI Infrastructure Advances. Significant AI infrastructure developments include Groq's partnership with Aramco for a massive data center in Saudi Arabia, and Microsoft's plan to power data centers using a reopened Three Mile Island nuclear plant.

insight • 1 month ago • Via Last Week in AI • www.npr.org

AI in Healthcare. AI tool cuts unexpected deaths in hospital by 26%, Canadian study finds.

data point • 1 month ago • Via Last Week in AI • www.cbc.ca

IDE Benefits. A good IDE probably is a much bigger, much less expensive, much less hyped improvement that helps more people more reliably.

recommendation • 1 month ago • Via Gary Marcus on AI •

Use AI Appropriately. Use it to type faster, not as a substitute for clear thinking about algorithms + data structures.

recommendation • 1 month ago • Via Gary Marcus on AI •

Conceptual Understanding. 10x-ing requires deep conceptual understanding – exactly what GenAI lacks.

insight • 1 month ago • Via Gary Marcus on AI •

Hype vs. Reality. The results here point to modest improvements, with some potential costs for security and technical debt, not a 10x improvement.

insight • 1 month ago • Via Gary Marcus on AI •

Long-Term Risks. Users writing less secure code could lead to a net loss of productivity long term.

insight • 1 month ago • Via Gary Marcus on AI • papers.ssrn.com

Quality Concerns. An earlier study showed 'downward pressure on code quality'.

data point • 1 month ago • Via Gary Marcus on AI • www.gitclear.com

Mixed Results. Another somewhat more positive study shows moderate (26%, not 1000%) improvement for junior developers, and only 'marginal gains' for senior developers.

data point • 1 month ago • Via Gary Marcus on AI •

Limited AI Benefits. One result with 800 programmers shows little improvement and more bugs.

data point • 1 month ago • Via Gary Marcus on AI • www.cio.com

Productivity Claims. The data are coming in – and they do not support the 10x productivity claims.

insight • 1 month ago • Via Gary Marcus on AI •

Job Retention Issues. Many staff have left, perhaps out of a sense that the mission had been abandoned.

insight • 1 month ago • Via Gary Marcus on AI •

Public Benefit Requirement. The code should be opened for the public benefit.

recommendation • 1 month ago • Via Gary Marcus on AI •

Further Reading. You can read more about their analysis here.

data point • 1 month ago • Via Gary Marcus on AI • www.citizen.org

Transition Cost Proposal. The advocacy group Public Citizen has a proposal: the change from nonprofit should cost at least 20% of the business, perhaps more.

recommendation • 1 month ago • Via Gary Marcus on AI •

OpenAI's Shift. Now OpenAI wants to renege on its promises, and become a for-profit.

insight • 1 month ago • Via Gary Marcus on AI •

OpenAI's Advanced Voice Mode. OpenAI has announced the rollout of its Advanced Voice Mode (AVM) to a broader set of ChatGPT's paying customers, with the update including five new voices and enhanced speech naturalness.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

AI Regulation Veto. California Governor Gavin Newsom vetoed a pioneering bill aimed to establish safety measures for large AI models, citing concerns about the bill's applicability to high-risk environments.

insight • 1 month ago • Via Last Week in AI • www.cbsnews.com

Meta's Llama 3.2. Meta has released Llama 3.2, the first of its large open-source models capable of processing both images and text.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

OpenAI Funding Goals. OpenAI's CFO tells investors the funding round should close by next week despite the executive departures.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

OpenAI Restructuring. OpenAI is undergoing a significant transition as it seeks to become more appealing to external investors, including a shift towards becoming a for-profit business and potentially raising one of the largest funding rounds in recent history.

insight • 1 month ago • Via Last Week in AI • www.nytimes.com

Executive Departures. Multiple high-ranking employees resigned last week, including Chief Technical Officer Mira Murati, Chief Research Officer Bob McGrew, and VP of Research Barret Zoph, who expressed support for OpenAI despite their departure.

insight • 1 month ago • Via Last Week in AI • www.nytimes.com

Concerns on AI Security. AI safety controls can be bypassed by translating malicious requests into math equations, posing a critical vulnerability.

concern • 1 month ago • Via Last Week in AI • www.csoonline.com

AI Investor Interest. Middle Eastern sovereign wealth funds have increased funding for Silicon Valley's AI companies fivefold in the past year, showing strong interest in the AI sector.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

Duolingo's New Features. Duolingo announced its AI-powered Adventures mini-games and Video Call feature to enhance language learning.

insight • 1 month ago • Via Last Week in AI • venturebeat.com

Meta AI Features. Meta's AI can now talk to users in the voices of celebrities like Awkwafina and John Cena, enabling a more engaging interaction experience.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

Independent Analysis. I put a lot of effort into creating work that is informative, useful, and independent from undue influence.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Dynamic Adjustments. The model constantly receives feedback from the classifier and adjusts its output accordingly, leading to a final generated text that is both coherent and aligned with the safety/goal guidelines.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Fine-Tuning Limitations. Fine-tuning a large language model requires significant computational resources and time, making it impractical to train separate models for every desired attribute combination.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Robustness through Noise. DGLM incorporates Gaussian noise augmentation during the training of the decoder.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Decoupled Training. DGLM effectively decouples attribute control from the training of the core language model.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Toxicity Reduction. Increasing guidance reduces toxicity with minimal loss of fluency.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Improved Performance. DGLM consistently outperforms existing plug-and-play methods in tasks like toxicity mitigation and sentiment control.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Single Classifier Control. Further, controlling a new attribute in our framework is reduced to training a single logistic regression classifier.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
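
To make that concrete: a plug-and-play attribute controller of this kind reduces to fitting a logistic regression on fixed embeddings of candidate text. The stdlib-only toy below invents its own 2-D "embeddings" and training loop for illustration; it is not DGLM's actual pipeline:

```python
import math

def train_attribute_classifier(embeddings, labels, lr=0.5, epochs=200):
    """Tiny logistic-regression sketch: given fixed semantic embeddings
    of text drafts and binary attribute labels (e.g. 1 = desired
    sentiment), fit weights w and bias b by gradient descent."""
    dim = len(embeddings[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(embeddings, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y  # gradient of the logistic loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def attribute_score(w, b, x):
    """sigmoid(w.x + b): the guidance signal for steering generation."""
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Toy embeddings: the attribute is present when the first coordinate is high.
X = [[1.0, 0.1], [0.9, 0.3], [-1.0, 0.2], [-0.8, -0.1]]
y = [1, 1, 0, 0]
w, b = train_attribute_classifier(X, y)
assert attribute_score(w, b, [1.0, 0.0]) > 0.8
assert attribute_score(w, b, [-1.0, 0.0]) < 0.2
```

The appeal of the approach is exactly what the card states: adding control over a new attribute costs one small classifier, not another round of fine-tuning the language model itself.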

New Approach. Their novel framework for controllable text generation combines the strengths of auto-regressive and diffusion models.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Changes in Financing. Perhaps more likely is that the company will make some fairly major concessions to get it over the line.

recommendation • 1 month ago • Via Gary Marcus on AI •

Industry Impact. If they stumble, it will have ripple effects.

insight • 1 month ago • Via Gary Marcus on AI •

Risk of Collapse. If the round does fall apart and investors back down, OpenAI could be in trouble.

insight • 1 month ago • Via Gary Marcus on AI •

Concerns Over Cash. OpenAI probably doesn’t have a lot of cash on hand.

insight • 1 month ago • Via Gary Marcus on AI •

Operating Loss. Their operating loss last year is said to be on the order of $5 billion.

data point • 1 month ago • Via Gary Marcus on AI •

Funding Rumors. OpenAI is trying to raise a lot of money, rumored to be $6.5 to $7 billion, apparently at a $150 billion valuation.

data point • 1 month ago • Via Gary Marcus on AI • www.nytimes.com

Adobe Video Generation. Adobe adds video generation to Firefly, Anthropic launches AI safety-focused Claude enterprise.

data point • 1 month ago • Via Last Week in AI •

AI Detection Tools. YouTube is developing AI detection tools for music and faces, plus creator controls for AI training.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

AI in Japan. Japan's Sakana AI partners Nvidia for research, raises $100M.

data point • 1 month ago • Via Last Week in AI • cointelegraph.com

Paid Users Milestone. OpenAI Hits 1 Million Paid Users For Business Versions of ChatGPT.

data point • 1 month ago • Via Last Week in AI • www.bloomberg.com

OpenAI Valuation. OpenAI Fundraising Set to Vault Startup's Valuation to $150 Billion.

data point • 1 month ago • Via Last Week in AI • www.bloomberg.com

AI Forecasting Competitors. New AI forecasting bot competes with veteran human forecasters.

data point • 1 month ago • Via Last Week in AI •

LLAMA3 Performance. Llama 3 8B excels with synthetic tokens; AI-generated ideas deemed more novel.

data point • 1 month ago • Via Last Week in AI •

OpenAI O1 Models. OpenAI's O1 and O1 mini models boast advanced reasoning and longer responses.

data point • 1 month ago • Via Last Week in AI •

AI Bias Issues. AI tends to replicate the biases in your systems, limiting creativity and diversity in content production.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Camus and Rebellion. Albert Camus’s philosophy encourages embracing life for what it is and thus living a richer, more fulfilling life.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Navigating Hyperreality. The path of least resistance leads to passive consumption and an acceptance of superficiality in our interactions with media.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Linguistic Diversity Decline. Our findings reveal a consistent decrease in the diversity of the model outputs through successive iterations.

insight • 1 month ago • Via Artificial Intelligence Made Simple • arxiv.org

Diversity in Content Creation. Large language models have led to a surge in collaborative writing with model assistance, risking decreased diversity in the produced content.

insight • 1 month ago • Via Artificial Intelligence Made Simple • arxiv.org

Critical Thinking Outsourcing. Outsourcing critical thinking to AI systems can lead to reducing individual cognitive engagement and understanding.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Generative AI Limitations. An overreliance on GenAI will lead to people outsourcing their thinking to GPT, reducing their critical thinking abilities.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Information Overload Effects. We live in a world where there is more and more information, and less and less meaning.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Simulacra and Reality. Baudrillard argued that our relationship with reality is mediated through signs and symbols, which have become detached from any underlying truth.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

AI in Moderation Issues. The use of AI in Moderation can impose arbitrary standards that limit the distribution of certain kinds of content, and push others, all creating a loss of creativity and critical thinking.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Learning Budget Opportunity. Many companies have a learning budget, and you can expense your subscription through that budget.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple • docs.google.com

Comparison to WeWork. Gary Marcus has repeatedly warned that OpenAI might someday be seen as the WeWork of AI.

insight • 1 month ago • Via Gary Marcus on AI •

Investor Caution. Investors shouldn’t be pouring more money at higher valuations, they should be asking what is going on.

recommendation • 1 month ago • Via Gary Marcus on AI •

Valuation Concerns. Yet people are valuing this company at $150 billion.

insight • 1 month ago • Via Gary Marcus on AI •

Absence of Product Releases. GPT-5 hasn’t dropped, Sora hasn’t shipped.

data point • 1 month ago • Via Gary Marcus on AI •

Massive Operating Loss. The company had an operating loss of $5b last year.

data point • 1 month ago • Via Gary Marcus on AI •

Co-founder Departures. From left to right that’s Ilya Sutskever (now gone, less than a year later), Greg Brockman (on leave, at least until the end of the year), CTO Mira Murati (departure just announced) and Sam Altman (fired, and then rehired).

data point • 1 month ago • Via Gary Marcus on AI •

Iconic Magazine Covers. This one, from last September, may soon become just as iconic.

insight • 1 month ago • Via Gary Marcus on AI •

Retracted Statement. I am retracting our earlier statement that OpenAI deliberately cherry-picked the medical diagnostic example to make o-1 seem better than it is.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Over-Diagnosing Rare Condition. I also noticed that GPT seems to over-estimate the probability of (and thus over-diagnose) a very rare condition, which is a major flag and must be studied further.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Transparency in Results. Anyone claiming to have a powerful foundation model for these tasks should be sharing their evals.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

OpenAI Responsibility. I think technical solution providers have a duty to make users clearly aware of any limitations upfront.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

High Probability Concerns. Given the low prior probability, I am naturally suspicious of any system that weighs it this highly.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Need for Cross-Validation. I didn’t think I would have to teach OAI folk the importance of cross-validation.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Testing Methodology Issues. B mentioned that they had tested O1 (main) for the prompt a bunch of times. O1 always had the same outputs (KBG).

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Judging OpenAI Performance. OAI promotes the performance of its model without acknowledging massive limitations.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Models Need Clarification. For diagnostic purposes, it is better to offer models that provide probability distributions plus deep insights so that doctors can make their own call.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Inconsistent Diagnosis. Running the prompt (making a diagnosis based on a given phenotype profile) on ChatGPT o-1 leads to inconsistent diagnoses.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Floating Harbor Syndrome Rarity. The Floating Harbor Syndrome has been recorded in less than 50 people ever.

data point • 1 month ago • Via Artificial Intelligence Made Simple • en.wikipedia.org

Publish Testing Outputs. In the future, I think any group making claims of great performance on Medical Diagnosis must release their testing outputs on this domain.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Caution in Medical Use. This (and its weird probability distributions for diseases) leads me to caution people against using o-1 in Medical Diagnosis.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Luma's Dream Machine API. Text-to-video startup Luma AI has announced an API for its Dream Machine video generation model, allowing users to build applications and services on top of it.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

California's Deepfake Legislation. Governor Newsom signs bills to combat deepfake election content, including legislation to protect the digital likeness of actors and performers.

recommendation • 1 month ago • Via Last Week in AI • www.gov.ca.gov

White House AI Task Force. White House launches AI data center task force with industry experts to address massive infrastructure needs for artificial intelligence projects.

data point • 1 month ago • Via Last Week in AI • www.datacenterknowledge.com

Lionsgate's AI Ambition. Lionsgate has announced a partnership with Runway to develop an AI model that can generate 'cinematic video' and potentially replace storyboard artists and VFX crews.

data point • 1 month ago • Via Last Week in AI • www.cartoonbrew.com

Runway's API Offerings. Runway, an AI startup also focused on video creation, has launched its own API that allows developers to integrate its generative models into third-party platforms, currently offering its Gen-3 Alpha Turbo model with two pricing plans.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

James Earl Jones Controversy. James Earl Jones' decision to use AI to preserve his voice as Darth Vader raises concerns among actors about the potential impact on their work and the need for consent and compensation transparency.

insight • 1 month ago • Via Last Week in AI • www.foxnews.com

AI Reducing Hospital Deaths. AI-based early warning system at St. Michael's Hospital in Toronto, called Chartwatch, has led to a 26% decrease in unexpected deaths among hospitalized patients.

insight • 1 month ago • Via Last Week in AI • www.cbc.ca

Copilot Wave 2 Launch. Microsoft's Copilot AI chatbot, now in its 'Wave 2' phase, enhances productivity in Microsoft 365 apps by enabling collaborative document creation, narrative building in PowerPoint, and intelligent email summarization in Outlook.

insight • 1 month ago • Via Last Week in AI • www.techradar.com

1X Technologies Innovation. Norwegian startup 1X Technologies has developed an AI-based world model to serve as a virtual simulator for training robots, addressing the challenge of reliably evaluating multi-task robots in dynamic environments.

data point • 1 month ago • Via Last Week in AI • www.maginative.com

OpenAI Testimony. OpenAI whistleblower William Saunders testified that the company has 'repeatedly prioritized speed of deployment over rigor.'

data point • 2 months ago • Via Artificial Intelligence Made Simple • www.c-span.org

Loss of Trust. A single negative encounter can drastically undermine their perception of the AI’s reliability and hinder human-AI collaboration.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Truth in AI. The solution for AI in healthcare is simple: Give clinicians the probabilities of your answers or start developing models that are capable of saying 'I don’t know!'

recommendation • 2 months ago • Via Artificial Intelligence Made Simple •
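
That recommendation amounts to selective prediction: report the top diagnosis only above a confidence threshold, and otherwise abstain while surfacing the full distribution. A hypothetical sketch, where the interface and the 0.75 threshold are assumptions for illustration:

```python
def diagnose_with_abstention(probs: dict, threshold: float = 0.75):
    """Selective-prediction sketch: given a model's probability
    distribution over candidate diagnoses, return the top diagnosis
    only when the model is confident enough; otherwise abstain with
    "I don't know" and hand the clinician the full distribution."""
    top, p = max(probs.items(), key=lambda kv: kv[1])
    if p >= threshold:
        return top, p
    return "I don't know", probs

assert diagnose_with_abstention({"flu": 0.9, "cold": 0.1})[0] == "flu"
assert diagnose_with_abstention({"flu": 0.4, "cold": 0.35, "other": 0.25})[0] == "I don't know"
```

Either branch keeps the clinician in charge: a confident answer arrives with its probability attached, and an uncertain one arrives as an explicit abstention rather than a rationalized guess.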

Benchmark Concerns. Benchmarks are gameable and aren't representative of the complexities found in real-world applications.

insight • 2 months ago • Via Artificial Intelligence Made Simple • societysbackend.com

AgentClinic-MedQA Claims. AgentClinic-MedQA claims Strawberry is the top choice for medical diagnostics.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

o1 Model Claims. What’s fascinating is that, for the first time ever, a foundational model—without any fine-tuning on medical data—is offering a medical diagnosis use case in its new release!

insight • 2 months ago • Via Artificial Intelligence Made Simple •

AI in Healthcare. This might be a viable strategy for a fledgling fintech startup, but it’s reckless and dangerous for a company like OpenAI, especially when they now promote applications in critical areas like medical diagnostics.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Lean Startup Culture. Since its inception, OpenAI has embraced a 'Lean Startup' culture—quickly developing an MVP (Minimum Viable Product), launching it into the market, and hoping something 'sticks.'

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Hallucinations in Diagnosis. The o1 'Strawberry' model rationalizes misdiagnoses, producing confidently wrong information, which is especially dangerous in clinical decision-making.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

o1 Model Diagnosis. My verdict: even the best large language models (LLMs) we have are not ready for prime time in healthcare.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Sergei Polevikov Introduction. Today’s guest author is Sergei Polevikov, a Ph.D.-trained mathematician, data scientist, AI entrepreneur, economist, and researcher with over 30 academic manuscripts.

data point • 2 months ago • Via Artificial Intelligence Made Simple •

Learning Budget. Many companies have a learning budget that you can expense this newsletter to.

data point • 2 months ago • Via Artificial Intelligence Made Simple •

Independent Work. I put a lot of effort into creating work that is informative, useful, and independent from undue influence.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Guest Insights. In the series Guests, I will invite these experts to come in and share their insights on various topics that they have studied/worked on.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Chocolate Milk Cult. Our chocolate milk cult has a lot of experts and prominent figures doing cool things.

data point • 2 months ago • Via Artificial Intelligence Made Simple •

Tom Hanks Warning. Tom Hanks warns followers to be wary of 'fraudulent' ads using his likeness through AI.

insight • 2 months ago • Via Last Week in AI • www.nbcnews.com

China's Chip Advancements. China's chip capabilities are reportedly just 3 years behind TSMC, showcasing rapid advancements.

data point • 2 months ago • Via Last Week in AI • asia.nikkei.com

Investment in AI Companies. Ilya Sutskever's startup, Safe Superintelligence, raises $1B, signaling strong investor confidence in AI.

data point • 2 months ago • Via Last Week in AI • techcrunch.com

AI Regulation in California. California's pending AI regulation bill highlights growing governmental interest in AI oversight.

insight • 2 months ago • Via Last Week in AI • www.nytimes.com

AI Training Advances. Advances in training language models with long-context capabilities are emerging in the AI landscape.

insight • 2 months ago • Via Last Week in AI •

OpenAI Hardware Move. OpenAI's move into hardware production is a significant development for the company.

insight • 2 months ago • Via Last Week in AI •

Amazon AI Robotics. Amazon's strategic acquisition in AI robotics is a notable event in the industry.

insight • 2 months ago • Via Last Week in AI •

Fostering Innovation. OSS leads to cheaper, safer, and more accessible products, all benefiting end users.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Diverse Contributor Benefits. OSS attracts a diverse set of contributors, leading to more efficient and innovative solutions.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Invest in Community Building. It is critical for any group to invest in creating a developer-friendly open-source project through comprehensive documentation and community engagement.

recommendation • 2 months ago • Via Artificial Intelligence Made Simple •

Ecosystem Development. Collaborating with other organizations to create integrated AI solutions expands market opportunities.

recommendation • 2 months ago • Via Artificial Intelligence Made Simple •

Training and Support. Providing training and certification in open-source AI frameworks can also generate revenue and build a community of skilled users.

recommendation • 2 months ago • Via Artificial Intelligence Made Simple •

OSS and Innovation. Open-source projects tend to explore more novel directions, lacking the short-term profit motives of traditional companies.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Micro and Macro Impact. OSS is really good at solving big, important problems that affect tons of people.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Benefits of Sharing. Companies that share their software get better street cred, outsource a lot of R&D to people for free, and hook more people into their ecosystem.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Cost Reduction Strategies. Adopting preexisting OS tools allows companies to reduce costs, build more secure systems, and iterate quickly.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

End-User Benefits. End-users benefit from AI-powered applications that are improved through open-source collaboration.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Developer Portfolio Boost. Participation in open-source AI projects enhances career prospects as developers build public portfolios showcasing expertise in a highly competitive field.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Complementary Forces. Open and Closed Software are often complementary forces that are blended together to create a useful end product.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Open Source Investment. Companies invest significantly in open-source software (OSS) for enhanced innovation and competitive advantage.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Reasoning Capabilities. OpenAI describes this release as a 'preview,' highlighting its early-stage nature and positioning o1 as a significant advancement in reasoning capabilities.

insight • 2 months ago • Via Last Week in AI •

Autonomous AI Agents. 1,000 autonomous AI agents collaborate to build their own society in a Minecraft server, forming a merchant hub and establishing a constitution.

insight • 2 months ago • Via Last Week in AI • www.trendwatching.com

Humanoid Robot Development. A robotics company in Silicon Valley has made significant progress in developing humanoid robots for real-world work scenarios.

data point • 2 months ago • Via Last Week in AI • techcrunch.com

DataGemma Introduction. Google introduces DataGemma, a pair of open-source AI models that address the issue of inaccurate answers in statistical queries.

data point • 2 months ago • Via Last Week in AI • venturebeat.com

Adobe Firefly Milestone. Adobe's Firefly Services, the company's AI-driven innovation, has reached a milestone of 12 billion generations.

data point • 2 months ago • Via Last Week in AI • www.pymnts.com

Runway AI Upgrade. AI video platform RunwayML has introduced a new video-to-video tool in its latest model, Gen-3 Alpha.

data point • 2 months ago • Via Last Week in AI • www.theverge.com

Corporate Structure Change. Sam Altman announced that the company's non-profit corporate structure will undergo changes in the coming year, moving away from being controlled by a non-profit.

data point • 2 months ago • Via Last Week in AI • fortune.com

AI Potential Advancements. These models represent a major leap forward in AI’s problem-solving potential, paving the way for new advancements in fields like medicine, engineering, and advanced coding tasks.

insight • 2 months ago • Via Last Week in AI •

OpenAI o1 Model. OpenAI has introduced this new model as part of a planned series of 'reasoning' models aimed at tackling complex problems more efficiently than ever before.

data point • 2 months ago • Via Last Week in AI • www.theverge.com

API Costs High. For developers, however, it’s worth noting that the model takes much longer to produce outputs and the API costs for o1 are significantly higher than GPT-4o.

data point • 2 months ago • Via Last Week in AI •

Training Approach. What sets o1 apart is its training approach—unlike previous GPT models, which were trained to mimic data patterns, o1 uses reinforcement learning to think through problems, step by step.

insight • 2 months ago • Via Last Week in AI •

Microsoft's Usage Caps. Microsoft's Inflection adds usage caps for Pi; Cerebras Systems launches new AI inference services competing with Nvidia.

insight • 2 months ago • Via Last Week in AI •

AI Advancements. Google's AI advancements with Gemini 1.5 models and AI-generated avatars, along with Samsung's lithography progress.

insight • 2 months ago • Via Last Week in AI •

U.S. Restrictions on China. U.S. gov't tightens China restrictions on supercomputer component sales.

insight • 2 months ago • Via Last Week in AI • www.tomshardware.com

Chinese GPU Access. Chinese Engineers Reportedly Accessing NVIDIA's High-End AI Chips Through Decentralized 'GPU Rental Services'.

insight • 2 months ago • Via Last Week in AI • wccftech.com

Elon Musk's Support. Elon Musk voices support for California bill requiring safety tests on AI models.

insight • 2 months ago • Via Last Week in AI • www.reuters.com

Poll on SB1047. Poll: 7 in 10 Californians Support SB1047, Will Blame Governor Newsom for AI-Enabled Catastrophe if He Vetoes.

data point • 2 months ago • Via Last Week in AI • mailchi.mp

AI Regulation. AI regulation discussions including California's SB1047, China's AI safety stance, and new export restrictions impacting Nvidia's AI chips.

insight • 2 months ago • Via Last Week in AI •

Bias in AI. Biases in AI, prompt leak attacks, and transparency in models and distributed training optimizations, including the 'distro' optimizer.

insight • 2 months ago • Via Last Week in AI •

Marcus' Dream. Gary Marcus continues to dream of a day when AI research doesn't center almost entirely around LLMs.

insight • 2 months ago • Via Gary Marcus on AI •

Update on Strawberry. OpenAI’s latest, GPT o1, code named Strawberry, came out.

data point • 2 months ago • Via Gary Marcus on AI • x.com

GPT-4 Prediction. “Still flawed, still limited, seem more impressive on first use”. Almost exactly what I predicted we would see with GPT-4, back on Christmas Day 2022.

insight • 2 months ago • Via Gary Marcus on AI •

Synthetic Data Dependence. The new system appears to depend heavily on synthetic data, and such data may be easier to produce in some domains (such as those in which o1 is most successful, like some aspects of math) than in others.

insight • 2 months ago • Via Gary Marcus on AI •

Altman's AGI Stance. Altman had, much to my surprise, just echoed my longstanding position that current techniques alone would not be enough to get to AGI.

insight • 2 months ago • Via Gary Marcus on AI •

Content Recommendations. I figured I’d start sharing whatever AI Papers/Publications, interesting books, videos, etc I came across each week.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

AI Summary Study. The reviewers’ overall feedback was that they felt AI summaries may be counterproductive and create further work because of the need to fact-check and refer to original submissions.

insight • 2 months ago • Via Artificial Intelligence Made Simple • www.crikey.com.au

Green Powders Marketing. Good video on the misleading marketing behind Green Powders.

insight • 2 months ago • Via Artificial Intelligence Made Simple • youtu.be

Roaring Bitmaps Impact. By storing these indices as Roaring bitmaps, we are able to evaluate typical boolean filters efficiently, reducing latencies by orders of magnitude.

insight • 2 months ago • Via Artificial Intelligence Made Simple •
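
The bitmap-index idea behind that card can be sketched in a few lines. This is an illustrative toy, not the production system described in the newsletter: each attribute value maps to a bitmap of matching document positions, so a boolean filter becomes a single bitwise AND/OR. Real deployments use compressed Roaring bitmaps; plain Python integers stand in for them here.

```python
# Toy bitmap index: boolean filters as bitwise operations.
# Assumed example data; Python ints stand in for Roaring bitmaps.

def build_index(docs, field):
    """Map each value of `field` to an integer bitmap of doc positions."""
    index = {}
    for i, doc in enumerate(docs):
        index[doc[field]] = index.get(doc[field], 0) | (1 << i)
    return index

def bitmap_to_ids(bitmap):
    """Decode a bitmap back into a sorted list of document positions."""
    ids, i = [], 0
    while bitmap:
        if bitmap & 1:
            ids.append(i)
        bitmap >>= 1
        i += 1
    return ids

docs = [
    {"lang": "en", "type": "blog"},
    {"lang": "de", "type": "blog"},
    {"lang": "en", "type": "paper"},
]
by_lang = build_index(docs, "lang")
by_type = build_index(docs, "type")

# Filter: lang == "en" AND type == "blog" -> one bitwise AND
hits = by_lang["en"] & by_type["blog"]
print(bitmap_to_ids(hits))
```

The payoff is that conjunctions and disjunctions over millions of documents reduce to word-level bit operations, which is where the latency wins come from.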

AI Adoption Barriers. Until the liabilities and responsibilities of AI models for medicine are clearly spelled out via regulation or a ruling, the default assumption of any doctor is that if AI makes an error, the doctor is liable for that error, not the AI.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

AI in Clinical Diagnosis. Doctors bear a lot of risk for using AI, while model developers don’t.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Freedom of Speech Analysis. Tobias Jensen discusses content moderation on social media platforms and recent cases that trend toward preventing the harms that can (and have) been caused by social media messages not being regulated properly.

insight • 2 months ago • Via Artificial Intelligence Made Simple • futuristiclawyer.com

Highlighting Important Works. I’m going to highlight only two since they bring up extremely important discussions, and I want to get your opinions on them.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Next Planned Articles. Boeing, DEI, and 9 USD Engineers.

data point • 2 months ago • Via Artificial Intelligence Made Simple •

Survey Participation. Fred Graver is looking into understanding the demand for content around AI and is asking people to fill out a survey.

recommendation • 2 months ago • Via Artificial Intelligence Made Simple • www.reddit.com

Community Engagement. We started an AI Made Simple Subreddit.

data point • 2 months ago • Via Artificial Intelligence Made Simple • www.reddit.com

California AI Bill. The controversial California bill SB 1047, aimed at preventing AI disasters, has passed the state's Senate and is now awaiting Governor Gavin Newsom's decision.

data point • 2 months ago • Via Last Week in AI • www.nytimes.com

Waymo Collision Data. Waymo's driverless cars have been involved in fewer injury-causing crashes per million miles of driving than human-driven vehicles.

data point • 2 months ago • Via Last Week in AI • www.understandingai.org

AI Image Creation. AI has led to the creation of over 15 billion images since 2022, with an average of 34 million images being created per day.

data point • 2 months ago • Via Last Week in AI • journal.everypixel.com

Global AI Treaty. US, EU, and UK sign the world's first international AI treaty, emphasizing human rights and democratic values as key to regulating public and private-sector AI models.

data point • 2 months ago • Via Last Week in AI • cointelegraph.com

Music Producer Arrested. Music producer arrested for using AI and bots to boost streams and generate AI music, facing charges of money laundering and wire fraud.

insight • 2 months ago • Via Last Week in AI • www.edmtunes.com

AI in Healthcare. Google DeepMind has launched AlphaProteo, an AI system that generates novel proteins to accelerate research in drug design, disease understanding, and health applications.

data point • 2 months ago • Via Last Week in AI • analyticsindiamag.com

Ilya Sutskever Funding. Safe Superintelligence (SSI), an AI startup co-founded by Ilya Sutskever, has successfully raised over $1 billion in funding.

data point • 2 months ago • Via Last Week in AI • techcrunch.com

OpenAI AI Chips. OpenAI is reportedly planning to build its own AI chips using TSMC's forthcoming 1.6nm A16 process node, according to United Daily News.

data point • 2 months ago • Via Last Week in AI • www.yahoo.com

iPhone 16 Launch. Apple has unveiled its iPhone 16 line, which includes the iPhone 16, iPhone 16 Plus, iPhone 16 Pro, and iPhone 16 Pro Max, all designed with Apple Intelligence in mind.

data point • 2 months ago • Via Last Week in AI • finance.yahoo.com

AI Impacts on Society. AI is likely to change the world in coming years, affecting virtually every aspect of society, from employment to education to healthcare to national defense.

insight • 2 months ago • Via Gary Marcus on AI •

Future Responsibility. It will be our fault if candidates don’t address AI policy; they certainly aren’t going to bother to talk about it if we don’t let them know it matters.

insight • 2 months ago • Via Gary Marcus on AI •

Call for Clarity. In an ideal world, moderators would demand clarity on candidates' policies around AI.

recommendation • 2 months ago • Via Gary Marcus on AI •

Vulnerability of Teens. Nonconsensual deep fake porn may especially affect the already vulnerable population of teenage girls, who have been harmed by social media.

insight • 2 months ago • Via Gary Marcus on AI •

AI Policy Neglect. A total neglect of AI policy would be deeply unfortunate; our long-term future may actually be shaped more by AI policy than tariffs.

insight • 2 months ago • Via Gary Marcus on AI •

Candidates' AI Plans. It would be a really good time to demand better [AI policies] from candidates; if we don’t, future generations may regret it.

recommendation • 2 months ago • Via Gary Marcus on AI •

Foundation Model Size. Aurora is a 1.3-billion-parameter foundation model for environmental forecasting.

data point • 2 months ago • Via Artificial Intelligence Made Simple • www.microsoft.com

Predictive Modeling Framework. The authors have created a fine-tuning process that allows Aurora to excel at both short-term and long-term predictions.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Replay Buffer Mechanism. Aurora implements a replay buffer, allowing the model to learn from its own predictions, improving long-term stability.

insight • 2 months ago • Via Artificial Intelligence Made Simple •
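
The replay-buffer mechanism can be sketched as follows. This is a minimal, assumed implementation (not Aurora's actual code): the model's own earlier predictions are stored and later sampled as training inputs, so the model learns on the distribution it will actually see during long rollouts rather than only on ground-truth states.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of past model predictions."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries are evicted

    def add(self, state):
        self.buffer.append(state)

    def sample(self, k):
        # Draw up to k stored predictions to seed new training examples.
        return random.sample(list(self.buffer), min(k, len(self.buffer)))

buf = ReplayBuffer(capacity=1000)
# During training: predict one step, store the prediction, and sometimes
# start the next training example from a stored prediction instead of truth.
for step in range(5):
    prediction = f"state_{step}"  # stands in for a predicted weather state
    buf.add(prediction)

batch = buf.sample(3)
print(len(batch))
```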

Energy-Efficient Fine-Tuning. LoRA introduces small, trainable matrices to the attention layers, allowing Aurora to fine-tune efficiently while significantly reducing memory usage.

insight • 2 months ago • Via Artificial Intelligence Made Simple •
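
The LoRA mechanism the card describes can be shown concretely. A minimal NumPy sketch with assumed shapes (not Aurora's actual code): the pretrained weight W is frozen, and only two small factors A (d×r) and B (r×d) are trained, so the effective weight is W + A·B with a tiny fraction of the trainable parameters.

```python
import numpy as np

d, r = 512, 8  # layer width and LoRA rank (illustrative values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01   # trainable low-rank factor
B = np.zeros((r, d))                     # trainable, initialized to zero

def lora_forward(x):
    # B starts at zero, so the adapted layer initially matches the frozen one.
    return x @ W + x @ A @ B

x = rng.standard_normal((4, d))
assert np.allclose(lora_forward(x), x @ W)  # identical before any training

frozen_params = W.size
lora_params = A.size + B.size
print(lora_params / frozen_params)  # 2r/d of the original parameter count
```

With r = 8 and d = 512, the trainable parameters are about 3% of the frozen matrix, which is where the memory savings come from.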

Variable Weighting Methodology. Aurora uses variable weighting, where different weights are assigned to different variables in the loss function to balance their contributions.

insight • 2 months ago • Via Artificial Intelligence Made Simple •
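
Variable weighting in the loss can be sketched with a weighted MAE. The weights below are illustrative, not Aurora's published values: each variable gets its own weight so that quantities on very different physical scales contribute comparably to the objective.

```python
import numpy as np

def weighted_mae(pred, target, weights):
    """MAE per variable, combined with per-variable weights.

    pred, target: (batch, n_vars); weights: (n_vars,)
    """
    per_var_mae = np.abs(pred - target).mean(axis=0)
    return float((weights * per_var_mae).sum() / weights.sum())

# Two variables on different scales (e.g., one small, one large):
pred = np.array([[1.0, 100.0], [2.0, 90.0]])
target = np.array([[0.0, 110.0], [2.0, 100.0]])
weights = np.array([10.0, 1.0])  # up-weight the small-scale variable

print(weighted_mae(pred, target, weights))
```

Without the weights, the large-scale variable's errors would dominate the loss and the model would under-fit the small-scale one.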

Rollout Fine-tuning Importance. Rollout fine-tuning addresses the challenge by training Aurora on sequences of multiple predictions, simulating the chain reaction of weather events over time.

insight • 2 months ago • Via Artificial Intelligence Made Simple •
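
Rollout fine-tuning can be illustrated with toy dynamics (an assumed setup, not Aurora's training code): instead of scoring only one-step predictions against ground truth, the model is unrolled for several steps on its own outputs and the loss accumulates over the whole rollout, penalizing the compounding of small errors.

```python
def model(state, w):
    """Toy one-step 'forecast model' with a single parameter w."""
    return w * state

def rollout_loss(w, initial_state, truth, n_steps):
    state, loss = initial_state, 0.0
    for t in range(n_steps):
        state = model(state, w)        # feed the model its own prediction
        loss += abs(state - truth[t])  # score every step, not just the first
    return loss

truth = [2.0, 4.0, 8.0]  # generated by the true dynamics (w = 2)
# A slightly-off model looks fine at one step but drifts over the rollout:
print(rollout_loss(1.9, 1.0, truth, 1))  # small one-step error
print(rollout_loss(1.9, 1.0, truth, 3))  # much larger accumulated error
```

Training against the multi-step loss pushes the parameters toward values that stay accurate when errors are fed back in, which is exactly the chain-reaction behavior the card describes.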

Training with MAE. Mean Absolute Error (MAE) is used as the training objective, which is robust to outliers.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

U-Net Architecture. The U-Net architecture allows for multi-scale processing, enabling the model to simultaneously understand local weather patterns and larger-scale atmospheric phenomena.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Swin Transformer Benefits. Swin Transformers excel at capturing long-range dependencies and scaling to large datasets, which is crucial for weather modeling.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Impact of Underreporting. Aurora got almost no attention, indicating a serious misplacement of priorities in the AI Community.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Community Awareness Gap. The ability of foundation models to excel at downstream tasks with scarce data could democratize access to accurate weather and climate information in data-sparse regions, such as the developing world and polar regions.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Sandstorm Prediction. Aurora was able to predict a vicious sandstorm a day in advance, which can be used in the future for evacuations and disaster planning.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Limited Data Handling. Aurora leverages the strengths of the foundation modelling approach to produce operational forecasts for a wide variety of atmospheric prediction problems, including those with limited training data, heterogeneous variables, and extreme events.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Advanced Predictive Capabilities. In under a minute, Aurora produces 5-day global air pollution predictions and 10-day high-resolution weather forecasts that outperform state-of-the-art classical simulation tools and the best specialized deep learning models.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Energy Demand Increase. Demand is increasing, and the question is what bottlenecks will be alleviated to fulfill that demand.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Long-Term Value Creation. Value will be created in unforeseen ways.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Infrastructure Creation. AI application will not generate a net positive ROI on infrastructure buildout for some time.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

AI Model Revenues. Our best indication of AI app revenue comes from model revenue (OpenAI at an estimated $1.5B in API revenue).

data point • 2 months ago • Via Artificial Intelligence Made Simple •

Data Center Demand. Theoretically, value should flow through the traditional data center value chain.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

AI Total Expenditures. The cloud revenue gives us the real indication of how much value is being invested into AI applications.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

AI Application Revenue. AI applications have generated a very rough estimate of $20B in revenue with multiples higher than that in value creation so far.

data point • 2 months ago • Via Artificial Intelligence Made Simple •

Nvidia Revenue. Last quarter, Nvidia did $26.3B in data center revenue, with $3.7B of that coming from networking.

data point • 2 months ago • Via Artificial Intelligence Made Simple •

Power Scarcity. Hyperscalers will secure data center capacity themselves or through a developer like QTS, Vantage, or CyrusOne.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Compute Power Concerns. All three hyperscalers noted they’re capacity-constrained on AI compute power.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Application Value. ROI on AI will ultimately be driven by application value to end users.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Hyperscaler Decisions. Hyperscalers are making the right CapEx business decisions.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

No Clear ROI. There’s not a clear ROI on AI investments right now.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

AI ROI Debate. For the first time in a year and a half, common opinion is now shifting to the narrative 'Hyperscaler spending is crazy. AI is a bubble.'

insight • 2 months ago • Via Artificial Intelligence Made Simple •

CapEx Growth. Amazon, Google, Microsoft, and Meta have spent a combined $177B on capital expenditures over the last four quarters.

data point • 2 months ago • Via Artificial Intelligence Made Simple •

100K Readers. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 2 months ago • Via Artificial Intelligence Made Simple •

New Paradigms in NLP. Sebastian Raschka discusses recent pre-training and post-training paradigms in NLP models, highlighting significant new techniques.

insight • 2 months ago • Via Artificial Intelligence Made Simple • magazine.sebastianraschka.com

LLM Performance Restrictions. Imposing formatting restrictions on LLMs leads to performance degradation, impacting reasoning abilities significantly.

insight • 2 months ago • Via Artificial Intelligence Made Simple • arxiv.org

Risks of Synthetic Training. Training language models on synthetic data leads to a consistent decrease in the diversity of the model outputs through successive iterations.

insight • 2 months ago • Via Artificial Intelligence Made Simple • arxiv.org

Standardizing Text Diversity. This work empirically investigates diversity scores on English texts and provides a diversity score package to facilitate research.

insight • 2 months ago • Via Artificial Intelligence Made Simple • arxiv.org

Impact of LLMs on Diversity. Writing with InstructGPT results in a statistically significant reduction in diversity.

insight • 2 months ago • Via Artificial Intelligence Made Simple • arxiv.org

Dimension Insensitive Metric. This paper introduces the Dimension Insensitive Euclidean Metric (DIEM) which demonstrates superior robustness and generalizability across dimensions.

insight • 2 months ago • Via Artificial Intelligence Made Simple • arxiv.org

Previews of Articles. Upcoming articles include 'The Economics of ESports' and 'The economics of Open Source.'

data point • 2 months ago • Via Artificial Intelligence Made Simple •

Notable Content Creator. Artem Kirsanov produces high-quality videos on computational neuroscience and AI, and offers very new ideas/perspectives for traditional Machine Learning people.

insight • 2 months ago • Via Artificial Intelligence Made Simple • www.youtube.com

AI Content Focus. The focus will be on AI and Tech, but ideas might range from business, philosophy, ethics, and much more.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Support for Writing. Doing so helps me put more effort into writing/research, reach more people, and supports my crippling chocolate milk addiction.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Authors' Lawsuit. Authors sue Claude AI chatbot creator Anthropic for copyright infringement.

insight • 2 months ago • Via Last Week in AI • abcnews.go.com

California AI Bill Weakening. California weakens bill to prevent AI disasters before final vote, taking advice from Anthropic.

insight • 2 months ago • Via Last Week in AI • techcrunch.com

Anysphere Funding. Anysphere, a GitHub Copilot rival, has raised $60M Series A at $400M valuation from a16z, Thrive, sources say.

insight • 2 months ago • Via Last Week in AI • techcrunch.com

OpenAI's New Deal. Ars Technica content is now available in OpenAI services.

insight • 2 months ago • Via Last Week in AI • arstechnica.com

AMD Acquisition. AMD buying server maker ZT Systems for $4.9 billion as chipmakers strengthen AI capabilities.

insight • 2 months ago • Via Last Week in AI • abcnews.go.com

California Regulation. Analysis of California's AI regulation bill SB1047 and legal issues related to synthetic media, copyright, and online personhood credentials.

insight • 2 months ago • Via Last Week in AI •

AI Model Scaling. Exploration of the feasibility and investment needed for scaling advanced AI models like GPT-4 and Agent Q architecture enhancements.

insight • 2 months ago • Via Last Week in AI •

Perplexity Updates. Perplexity's integration of Flux image generation models and code interpreter updates for enhanced search results.

insight • 2 months ago • Via Last Week in AI •

New AI Features. Ideogram AI's new features, Google's Imagen 3, Dream Machine 1.5, and Runway's Gen-3 Alpha Turbo model advancements.

insight • 2 months ago • Via Last Week in AI •

Episode Summary. Our 180th episode with a summary and discussion of last week's big AI news!

insight • 2 months ago • Via Last Week in AI •

Social Media Trends. Advice for content creators often revolves around imitating successful content rather than fostering unique voices, contributing to conformity.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Conformity in Media. Social media and content creation platforms, initially designed for authentic expression, often lead to a relentless drive toward sameness and conformity.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Democracy and Conformity. Tocqueville observed that democratic societies foster a sense of equality among citizens, which can lead to pressure for conformity, homogenizing thought, expression, and behavior.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Need for Critical Diversity. When people lose exposure to diverse viewpoints, their capacity to visualize alternatives diminishes, reinforcing conformity.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Intellectual Homogeneity. A populace that is intellectually homogenous tends to rely on external sources for solutions, sacrificing personal agency and responsibility.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Over-Reliance on Institutions. Tocqueville noticed a tendency for citizens to increasingly rely on the government under the expectation that an elected government should solve societal problems.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Collective Action Importance. The OSS movement in tech allows people to find their communities and contribute, emphasizing the importance of collective small contributions leading to significant shifts.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Voluntary Associations. Tocqueville noted that Americans constantly form associations for various purposes, which serve as a powerful tool for collective action and public benefit.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Local Community Power. Tocqueville saw voluntary organizations and local community groups as crucial to counterbalance the negative tendencies of democracy.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Tyranny of the Majority. In modern democracies, tyranny manifests through social ostracism rather than physical oppression, leading to self-censorship and a society of self-oppressors.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Agency and Accountability. Tocqueville emphasizes the importance of people accepting agency and accountability for their information diet instead of relying on institutions.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Mental Health and Misinformation. We cry for the government or social media companies to do something about worsening mental health and the spread of misinformation, but how many of us have acted positively on these platforms?

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Personal Responsibility. We often expect institutions to make systemic changes without acknowledging the importance of individual responsibility in taking actions that lead to systemic change.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

AI Ethical Concerns. Google DeepMind employees are urging the company to end military contracts due to concerns about AI technology used for warfare.

insight • 2 months ago • Via Last Week in AI • www.theverge.com

Open-Source AI Definition. Open-source AI is defined as a system that can be used, inspected, modified, and shared without restrictions.

insight • 2 months ago • Via Last Week in AI • www.technologyreview.com

Authors Sue Anthropic. Authors are suing AI startup Anthropic for using pirated texts to train its chatbot Claude, alleging large-scale theft.

insight • 2 months ago • Via Last Week in AI • abcnews.go.com

AI in Ad Creation. Creatopy, which automates ad creation using AI, has raised $10 million and now serves over 5,000 brands and agencies.

insight • 2 months ago • Via Last Week in AI • techcrunch.com

Google's AI Image Generator. Google has released a powerful AI image generator, Imagen 3, for free use in the U.S., outperforming other models.

insight • 2 months ago • Via Last Week in AI • petapixel.com

Content Partnership. OpenAI has partnered with Condé Nast to display content from its publications within AI products like ChatGPT and SearchGPT.

insight • 2 months ago • Via Last Week in AI • arstechnica.com

OpenAI's Regulatory Stance. OpenAI has opposed the proposed AI bill SB 1047 aimed at implementing safety measures, despite public support for regulation.

insight • 2 months ago • Via Last Week in AI • www.windowscentral.com

California AI Regulation. Anthropic's CEO supports California's AI bill SB 1047, stating the benefits outweigh the costs, despite some concerns.

insight • 2 months ago • Via Last Week in AI • www.pcmag.com

AI for Coding Tasks. Open source Dracarys models are specifically designed to optimize coding tasks and significantly improve performance of existing models.

insight • 2 months ago • Via Last Week in AI • venturebeat.com

Advanced Long-Context Models. AI21's Jamba 1.5 Large model has demonstrated superior performance in latency tests against similar models.

insight • 2 months ago • Via Last Week in AI • finance.yahoo.com

Outperforming Competitors. Microsoft's Phi-3.5 outperforms other small models from Google, OpenAI, Mistral, and Meta on several key metrics.

insight • 2 months ago • Via Last Week in AI • www.tomsguide.com

Efficient Small Models. Nvidia's Llama-3.1-Minitron 4B performs comparably to larger models while being more efficient to train and deploy.

insight • 2 months ago • Via Last Week in AI • venturebeat.com

Optimizations in Distance Measurement. FINGER significantly outperforms existing acceleration approaches and conventional libraries by 20% to 60% across different benchmark datasets.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Integration of Graph-Based Indexes. Given that we’re already working on graphs, another promising direction for us has been integrating graph-based indexes and search.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

User Verification. By letting our users both verify and edit each step of the AI process, we let them make the AI adjust to their knowledge and insight, instead of asking them to change for the tool.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Focus on Transparency. Model transparency is crucial as a few trigger words/phrases can change the meaning/implication of a clause; users need to have complete insight into every step of the process.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Leveraging Control Tokens. We use control tokens, which are special tokens to indicate different types of elements, enhancing our tokenization process.

insight • 2 months ago • Via Artificial Intelligence Made Simple •
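
The control-token idea can be sketched with a toy tokenizer. The token names below are purely illustrative (the article does not list the actual token set); the point is that control tokens are matched as whole units so they survive tokenization atomically instead of being split apart:

```python
import re

# Hypothetical control tokens marking element types (token names are
# illustrative, not the product's actual set).
CONTROL_TOKENS = ["<|clause|>", "<|citation|>", "<|definition|>"]

def tokenize(text, control_tokens=CONTROL_TOKENS):
    """Word-level tokenizer that keeps control tokens atomic."""
    ctrl = "|".join(re.escape(t) for t in control_tokens)
    pattern = re.compile(f"({ctrl})|\\w+")
    return [m.group(0) for m in pattern.finditer(text)]

tokens = tokenize("<|clause|> The party shall indemnify <|citation|> s. 42")
# control tokens come through as single tokens rather than fragments
```

Matching the control tokens before the generic word pattern is what guarantees a downstream model sees each marker as one indivisible symbol.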

Flexible Indexing Approach. Updating the indexes with new information is much cheaper than retraining your entire AI model. Index-based search also allows us to see which chunks/contexts the AI picks to answer a particular query.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Hallucinations in AI. Type 1 Hallucinations are not a worry because our citations are guaranteed to be from the data source, and Type 2 Hallucinations will be reduced significantly through our unique process of constant refinement.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Focus on User Feedback. Our unique approach to involving the user in the generation process leads to a beautiful pair of massive wins against Hallucinations.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Reducing Costs. Relying on a smaller, Mixture of experts style setup instead of letting bigger models do everything reduces our costs dramatically, allowing us to do more with less.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Flexibility in Architecture. The best architecture is useless if it can't fit into your client's processes. Being Lawyer-Led, IQIDIS understands the importance of working within a lawyer's/firm's workflow.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

KI-RAG Challenges. Building KI-RAG systems requires a lot more handling and constant maintenance, making them more expensive than traditional RAG.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Handling Legal Nuances. There is a lot of nuance to Law. Laws can change between regions, different sub-fields weigh different factors, and a lot of law is done in the gray areas.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

High Cost of Mistakes. A mistake can cost a firm millions of dollars in settlements and serious loss of reputation. This high cost justifies the investment into better tools.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Importance of RAG. RAG is one of the most important use-cases for LLMs, and the goal is to build the best RAG systems possible.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Cost of Legal Expertise. Legal Expertise is expensive. If a law firm can cut down the time required for a project by even a few hours, they are already looking at significant savings.

data point • 2 months ago • Via Artificial Intelligence Made Simple •

Need for Higher Adaptability. Building upon this is a priority after our next round of fund-raising (or for any client that specifically requests this).

recommendation • 2 months ago • Via Artificial Intelligence Made Simple •

End User Engagement. Users can inspect multiple alternative paths to verify the quality of secondary/tertiary relationships.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Obsession with User Feedback. I’d be lying if I said that there is one definitive approach (or that what we’ve done is absolutely the best approach).

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Machine Learning in Legal Domain. These are the main aspects of the text-based search/embedding that are promising based on research and our own experiments.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Governance Concerns. Internal governance is key; it shouldn't be just one person at the top of one company calling the shots for all humanity.

insight • 2 months ago • Via Gary Marcus on AI •

Legislation Improvement. Saunders did not think SB-1047 was perfect but called the proposed legislation "the best attempt I've seen to provide a check on this power."

insight • 2 months ago • Via Gary Marcus on AI •

Employee Discontent. Promises have been made and not kept; they lost faith in Altman personally, and have lost faith in the company's commitment to AI safety.

insight • 2 months ago • Via Gary Marcus on AI •

Power Corrupts. If we don't figure out the governance problem, internal and external, before the next big AI advance, we could be in serious trouble.

insight • 2 months ago • Via Gary Marcus on AI •

Timelines for AGI. Saunders thinks it is at least somewhat plausible we will see AGI in a few years; I do not.

insight • 2 months ago • Via Gary Marcus on AI •

Need for Regulation. If OpenAI (and others in Silicon Valley) succeed in torpedoing SB-1047, self-regulation is in many ways what we will be left with.

insight • 2 months ago • Via Gary Marcus on AI •

Call for Accountable Power. Saunders described a metaprinciple: "Don't give power to people or structures that can't be held accountable."

insight • 2 months ago • Via Gary Marcus on AI •

OpenAI's Opposition. OpenAI has just announced that it is opposed to California's SB-1047 despite Altman's public support for AI regulation at the Senate.

insight • 2 months ago • Via Gary Marcus on AI • www.theverge.com

Future Whistleblower Protections. One of the most important reasons for passing SB-1047 in California was its whistleblower protections.

insight • 2 months ago • Via Gary Marcus on AI • digitaldemocracy.calmatters.org

External Oversight Needed. There should be a role for external governance, as well: companies should not be able to make decisions of potentially enormous magnitude on their own.

insight • 2 months ago • Via Gary Marcus on AI •

AI Codec Proposal. Using canonical codec representations like JPEG, this article proposes a method to directly model images and videos as compressed files, showing its effectiveness in image generation.

recommendation • 2 months ago • Via Last Week in AI • arxiv.org

Deepfake Scams. Elderly retiree loses over $690,000 to digital scammers using AI-powered deepfake videos of Elon Musk to promote fraudulent investment opportunities.

insight • 2 months ago • Via Last Week in AI • www.nytimes.com

Procreate Stance. Procreate vows to never incorporate generative AI into its products, taking a stand against the technology.

data point • 2 months ago • Via Last Week in AI • techcrunch.com

US AI Lead. US leads in AI investment and job postings, surpassing China and other countries.

insight • 2 months ago • Via Last Week in AI • www.foxnews.com

AI Image Licensing. OpenAI CEO's warning about the use of copyrighted content in AI models is highlighted as Anthropic faces a lawsuit for training its Claude AI model using authors' work without consent.

insight • 2 months ago • Via Last Week in AI • www.windowscentral.com

AI Risks Repository. MIT researchers release a comprehensive AI risk repository to guide policymakers and stakeholders in understanding and addressing the diverse and fragmented landscape of AI risks.

data point • 2 months ago • Via Last Week in AI • techcrunch.com

Research Automation Phases. The AI Scientist operates in three phases: idea generation, experimental iteration, and paper write-up.

insight • 2 months ago • Via Last Week in AI • www.marktechpost.com

AI Scientist Development. "The AI Scientist" is a novel AI system designed to automate the entire scientific research process.

data point • 2 months ago • Via Last Week in AI • www.marktechpost.com

AI Artist Claim Approved. The judge allowed a copyright claim against DeviantArt, which used a model based on Stable Diffusion.

insight • 2 months ago • Via Last Week in AI • www.theverge.com

Lawsuit Progress. The lawsuit against AI companies Stability and Midjourney, filed by a group of artists alleging copyright infringement, has gained traction as Judge William Orrick approved additional claims.

insight • 2 months ago • Via Last Week in AI • www.theverge.com

Conversational Features. Gemini Live can also interpret video in real time and function in the background or when the phone is locked.

recommendation • 2 months ago • Via Last Week in AI • www.theverge.com

Gemini Live Introduction. Google has introduced a new voice chat mode for its AI assistant, Gemini, named Gemini Live.

data point • 2 months ago • Via Last Week in AI • www.theverge.com

AI-driven Features. The company plans to deploy Grok-2 and Grok-2 mini in AI-driven features on X, including improved search capabilities, post analytics, and reply functions.

recommendation • 2 months ago • Via Last Week in AI • techcrunch.com

Image Tolerance. Compared to other image generators on the market, the model is far more permissive with regards to what images it can generate.

insight • 2 months ago • Via Last Week in AI • www.theverge.com

Image Generation Capabilities. Grok has also integrated FLUX.1 by Black Forest Labs to enable users to generate images.

data point • 2 months ago • Via Last Week in AI • www.theverge.com

Premium Access. Access to Grok is currently limited to Premium and Premium+ users.

insight • 2 months ago • Via Last Week in AI • techcrunch.com

Grok-2 Release. Elon Musk's company, X, has launched Grok-2 and Grok-2 mini in beta, both of which are AI models capable of generating images on the X social network.

data point • 2 months ago • Via Last Week in AI • techcrunch.com

Research Scientist Openings. Haize Labs is looking for research scientists to join its team in NYC.

data point • 2 months ago • Via Artificial Intelligence Made Simple • job-boards.greenhouse.io

Shoutout.io Page. Shoutout.io is a very helpful tool that allows independent creators to gather testimonials in one place.

data point • 2 months ago • Via Artificial Intelligence Made Simple • redirect.medium.systems

Case Study Articles. I’d like to do more case-study-style articles, where we look into different organizations to study how they solved their business/operational challenges with AI.

recommendation • 2 months ago • Via Artificial Intelligence Made Simple •

Guest Posts Initiative. I want to integrate more guest posts in this newsletter to cover a greater variety of topics and hear from experts across the board.

recommendation • 2 months ago • Via Artificial Intelligence Made Simple •

Encouragement to Apply. We encourage you to apply even if you do not believe you meet every single qualification; we're open to considering a wide range of perspectives and experiences.

insight • 2 months ago • Via Artificial Intelligence Made Simple •

Prompt Caching Launch. Prompt Caching is Now Available on the Anthropic API for Specific Claude Models.

data point • 2 months ago • Via Last Week in AI • www.marktechpost.com

AI Search Evolution. Google's AI-generated search summaries change how they show their sources.

data point • 2 months ago • Via Last Week in AI • www.theverge.com

Risks of Unaligned AI. Overview of potential risks of unaligned AI models and skepticism around SingularityNet's AGI supercomputer claims.

insight • 2 months ago • Via Last Week in AI •

Huawei's AI Chip. Huawei's Ascend 910C AI chip aims to rival NVIDIA's H100 amidst US export controls.

data point • 2 months ago • Via Last Week in AI •

Grok 2 Beta Release. Grok 2's beta release features new image generation using Black Forest Labs' tech.

data point • 2 months ago • Via Last Week in AI •

Google Voice Chat Feature. Google introduces Gemini Voice Chat Mode available to subscribers and integrates it into Pixel Buds Pro 2.

data point • 2 months ago • Via Last Week in AI •

Deepfake Scams. How ‘Deepfake Elon Musk’ Became the Internet's Biggest Scammer.

data point • 2 months ago • Via Last Week in AI • www.nytimes.com

FCC AI Robocall Rules. FCC Proposes New Rules on AI-Powered Robocalls.

data point • 2 months ago • Via Last Week in AI • www.pymnts.com

MIT AI Risks Repository. MIT researchers release a repository of AI risks.

data point • 2 months ago • Via Last Week in AI • techcrunch.com

Popular AI Search Startup. Perplexity's popularity surges as AI search start-up takes on Google.

data point • 2 months ago • Via Last Week in AI • www.ft.com

Regulatory Fight. Most or all of the major big tech companies joined a lobbying organization that fought SB-1047, despite broad public support for the bill.

data point • 3 months ago • Via Gary Marcus on AI •

Innovative Balance. Passing SB-1047 may normalize the regulation of AI while allowing for continued innovation, showing that safety precautions are compatible with industry growth.

insight • 3 months ago • Via Gary Marcus on AI •

Need for Federal Legislation. Future state and federal efforts may suffer if the bill doesn't pass, showing that comprehensive regulatory efforts are needed at all levels.

recommendation • 3 months ago • Via Gary Marcus on AI •

Comprehensive Approach Needed. We need a comprehensive approach to AI regulation, as SB 1047 is just a start in addressing various risks associated with AI.

recommendation • 3 months ago • Via Gary Marcus on AI •

Whistleblower Protections. The bill provides important whistleblower protections, which are critical for transparency and accountability in AI companies.

insight • 3 months ago • Via Gary Marcus on AI •

Deterrent Value. SB-1047's strongest utility may come as a deterrent, clarifying that the duty to take reasonable care applies to AI developers.

insight • 3 months ago • Via Gary Marcus on AI •

Weak Assurance. The 'reasonable care' standard may be too weak, as billion-dollar companies might exploit it without facing meaningful consequences.

insight • 3 months ago • Via Gary Marcus on AI •

Narrow Focus. SB 1047 seems heavily skewed toward addressing hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation and discrimination.

insight • 3 months ago • Via Gary Marcus on AI •

Legal Standards. The new form of SB 1047 can basically only be used after something really bad happens, as a tool to hold companies liable, rather than prevent risks.

insight • 3 months ago • Via Gary Marcus on AI •

Bill Weakened. California's SB-1047 was significantly weakened in last-minute negotiations, affecting its ability to address catastrophic risks.

insight • 3 months ago • Via Gary Marcus on AI •

High Subscription Importance. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

AI Adoption by Courts. The Attorney General's Office of São Paulo adopted GPT-4 last year to speed up the screening and reviewing process of lawsuits.

data point • 3 months ago • Via Artificial Intelligence Made Simple • news.microsoft.com

Cautious AI Implementation. Hallucination risks and security and data confidentiality concerns call for tremendous caution and common sense when using and implementing AI tools.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Impact on Legal Services. Legal copilots will inevitably drive down the price of legal services and make legal knowledge more accessible to non-lawyers.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Legal AI Tools' Future. The legal copilots that will succeed should be developed and branded with a focus on time-savings and productivity benefits.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

Changing Nature of Legal Work. AI-driven tools will take care of routine, monotone tasks so lawyers can focus more on the strategic, high-value work.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

AI Use in Legal Sector. 73% of 700 lawyers planned to utilize generative AI in their legal work within the next year.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

AI Speed vs Court Speed. High tech runs three times faster than normal businesses, and the government runs three times slower than normal businesses.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

Access to Justice Correlation. We can find a strong correlation between the fairness and independence of the court system and the general life quality and well-being of its populace.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Judicial System's Importance. The court system undertakes a vitally important function in society as a central governance mechanism.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Experts in Chocolate Milk. Our chocolate milk cult has a lot of experts and prominent figures doing cool things.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

GPT-5 Not Released. And no, GPT-5 did not drop this week as many had hoped.

insight • 3 months ago • Via Gary Marcus on AI •

Expectations Reframing. At the very least, I foresee a significant reframing of expectations.

insight • 3 months ago • Via Gary Marcus on AI •

AI Winter Speculation. As for whether there is an AI winter coming, time will tell.

insight • 3 months ago • Via Gary Marcus on AI •

Thoughts on Regulation. My thoughts on regulation are of course coming soon, in my next book (Taming Silicon Valley, now available for pre-order).

recommendation • 3 months ago • Via Gary Marcus on AI •

Differing Views. Interesting to see where his take and mine differ.

insight • 3 months ago • Via Gary Marcus on AI • x.com

Audio Version Available. There is also an audio only version, here.

data point • 3 months ago • Via Gary Marcus on AI • podcasters.spotify.com

Keynote Video. Here’s the video (well-produced by Machine Learning Street Talk (MLST)) of a talk I gave on Friday as a keynote at AGI-Summit 24.

data point • 3 months ago • Via Gary Marcus on AI • agi-conf.org

Google Antitrust Ruling. Google Monopolized Search Through Illegal Deals, Judge Rules.

recommendation • 3 months ago • Via Last Week in AI • www.bloomberg.com

California AI Bill Impact. 'The Godmother of AI' says California's well-intended AI bill will harm the U.S. ecosystem.

recommendation • 3 months ago • Via Last Week in AI • fortune.com

New Humanoid Robot. Figure's new humanoid robot leverages OpenAI for natural speech conversations.

recommendation • 3 months ago • Via Last Week in AI • techcrunch.com

UK Merger Probe. Amazon faces UK merger probe over $4B Anthropic AI investment.

recommendation • 3 months ago • Via Last Week in AI • cointelegraph.com

OpenAI Co-founder Exit. OpenAI co-founder Schulman leaves for Anthropic, Brockman takes extended leave.

recommendation • 3 months ago • Via Last Week in AI • techcrunch.com

Adept AI Returns. Investors in Adept AI will be paid back after Amazon hires startup's top talent.

recommendation • 3 months ago • Via Last Week in AI • www.semafor.com

Character.AI Founders. Google's hiring of Character.AI's founders is the latest sign that part of the AI startup world is starting to implode.

recommendation • 3 months ago • Via Last Week in AI • fortune.com

Compute Efficiency Research. Research advancements such as Google's compute-efficient inference models and self-compressing neural networks, showcasing significant reductions in compute requirements while maintaining performance.

data point • 3 months ago • Via Last Week in AI •

Humanoid Robotics Advances. Rapid advancements in humanoid robotics exemplified by new models from companies like Figure in partnership with OpenAI, achieving amateur-level human performance in tasks like table tennis.

data point • 3 months ago • Via Last Week in AI •

OpenAI Changes. OpenAI's dramatic changes with co-founder exits, extended leaves, and new lawsuits from Elon Musk.

data point • 3 months ago • Via Last Week in AI •

Personnel Movements. Notable personnel movements and product updates, such as Character.ai leaders joining Google and new AI features in Reddit and Audible.

data point • 3 months ago • Via Last Week in AI •

Adversarial Perturbations Explained. Adversarial perturbations (AP) are subtle changes to images that can deceive AI classifiers by causing misclassification.

insight • 3 months ago • Via Artificial Intelligence Made Simple •
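
For intuition, the classic fast gradient sign method (FGSM, from Goodfellow et al.) can be sketched on a toy logistic classifier. This is one standard way such perturbations are built, used here purely as an illustration, not code from any study discussed above:

```python
import math

# FGSM on a toy logistic classifier: nudge each input feature by eps in
# the direction that increases the classifier's loss.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, w, b, y, eps):
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]   # d(cross-entropy)/dx_i
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.2], 1                    # model classifies x as class 1
x_adv = fgsm(x, w, b, y, eps=0.3)       # small per-feature change...
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
# ...yet p_adv falls below 0.5, flipping the predicted class
```

Even though no feature moved by more than 0.3, the perturbed input crosses the decision boundary; the same mechanism, at image scale, produces perturbations humans barely notice.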

Mass Surveillance Impact. A recent study in The Quarterly Journal of Economics suggests that fewer people protest when public safety agencies acquire AI surveillance software to complement their cameras.

data point • 3 months ago • Via Artificial Intelligence Made Simple • academic.oup.com

Multi-modal AI Concerns. Despite the potential of multi-modal AI, there is worry regarding its use in mass surveillance and automated weapon systems.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Emerging Adversarial Techniques. Transferability of adversarial examples between models and query-based attacks are vital strategies for black-box settings.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Evolutionary Strategies Potential. Evolutionary algorithms, such as genetic algorithms and differential evolution, show promise for generating adversarial perturbations.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Norm Considerations in Perturbation. Different norms (L1, L2, and L-infinity) significantly impact the outcome and effectiveness of adversarial perturbations.

insight • 3 months ago • Via Artificial Intelligence Made Simple •
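
Concretely, the three norms measure the same perturbation very differently, which is why the choice of constraint shapes the attack. A minimal sketch:

```python
# What counts as a "small" perturbation depends on the norm bounding it:
# L1 favors sparse changes, L2 bounds total energy, L-infinity bounds the
# largest single-feature change.
def l1(delta):
    return sum(abs(d) for d in delta)

def l2(delta):
    return sum(d * d for d in delta) ** 0.5

def linf(delta):
    return max(abs(d) for d in delta)

delta = [0.0, 0.0, 0.3, -0.1]  # perturbation on a 4-"pixel" image
# l1(delta) ≈ 0.4, l2(delta) ≈ 0.316, linf(delta) == 0.3
```

An attacker bounded in L-infinity may touch every pixel slightly, while an L1 budget of the same nominal size pushes toward changing only a few pixels a lot.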

Robust Features Importance. Training on just Robust Features leads to good results, suggesting a generalized extraction of robust features is a valuable future avenue for exploration.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Infectious Jailbreak Feasibility. Feeding an adversarial image into the memory of any randomly chosen agent can achieve infectious jailbreak, causing all agents to exhibit harmful behaviors exponentially fast.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Agent Smith Attack. The Agent Smith setup involves simulating a multi-agent environment where a single adversarial image can lead to widespread harmful behaviors across almost all agents.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Facial Recognition Use Case. In the U.K., the London Metropolitan Police admitted to using facial recognition technology on tens of thousands of people attending King Charles III's coronation in May 2023.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

WeRide IPO Plans. WeRide, a Chinese autonomous vehicle company, is seeking a $5.02 billion valuation in its U.S. IPO, aiming to raise about $96 million from the offering.

data point • 3 months ago • Via Last Week in AI • techcrunch.com

Falcon Mamba 7B Launch. The Technology Innovation Institute (TII) has introduced Falcon Mamba 7B, a new large language model that uses a State Space Language Model (SSLM) architecture, marking a shift from traditional transformer-based designs.

data point • 3 months ago • Via Last Week in AI • www.maginative.com

Figure 02 Introduction. Figure has introduced its latest humanoid robot, Figure 02, which is designed to work alongside humans in a factory setting.

data point • 3 months ago • Via Last Week in AI • techcrunch.com

AI-Driven 3D Generation. A research paper by scientists from Meta and Oxford University introduces VFusion3D, an AI-driven technique capable of generating high-quality 3D models from 2D images in seconds.

data point • 3 months ago • Via Last Week in AI • techcrunch.com

New Supercomputing Initiatives. A new supercomputing network aims to accelerate the development of artificial general intelligence (AGI) through a worldwide network of powerful computers.

data point • 3 months ago • Via Last Week in AI • www.livescience.com

AI Emotional Attachment Concerns. OpenAI is concerned about users developing emotional attachments to the GPT-4o chatbot, warning of potential negative impacts on human interactions.

insight • 3 months ago • Via Last Week in AI • www.techradar.com

Performance Verification. Falcon Mamba 7B has been independently verified by Hugging Face as the top-performing open-source SSLM globally, outperforming established transformer-based models in benchmark tests.

data point • 3 months ago • Via Last Week in AI •

AI Assistant at JPMorgan. JPMorgan Chase has rolled out a generative AI assistant to tens of thousands of its employees, designed to be as ubiquitous as Zoom.

data point • 3 months ago • Via Last Week in AI • www.cnbc.com

Artists' Lawsuit Progress. A class action lawsuit against AI companies Stability, Runway, and DeviantArt, filed by artists alleging copyright infringement, has been partially approved to proceed by a judge.

insight • 3 months ago • Via Last Week in AI •

AI Law in Europe. The world's first-ever AI law is now enforced in Europe, targeting US tech giants.

data point • 3 months ago • Via Last Week in AI • www.vcpost.com

AI News Summary. Hosts Andrey Kurenkov and Jon Krohn dive into significant updates and discussions in the AI world.

insight • 3 months ago • Via Last Week in AI •

Instagram AI Features. Instagram's new AI features allow people to create AI versions of themselves.

data point • 3 months ago • Via Last Week in AI • www.theverge.com

Waymo Rollout. Waymo's driverless cars have rolled out in San Francisco.

data point • 3 months ago • Via Last Week in AI • www.sfchronicle.com

NVIDIA Chip Issues. Nvidia reportedly delays its next AI chip due to a design flaw.

data point • 3 months ago • Via Last Week in AI • www.theverge.com

New AI Tools. Black Forest Labs releases Open-Source FLUX.1, a 12 Billion Parameter Rectified Flow Transformer capable of generating images from text descriptions.

data point • 3 months ago • Via Last Week in AI • www.marktechpost.com

Open-Source AI Stance. The White House says there is no need to restrict 'open-source' artificial intelligence — at least for now.

insight • 3 months ago • Via Last Week in AI • www.wdtn.com

Misinformation Impact. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted.

insight • 3 months ago • Via Last Week in AI • www.theverge.com

Common Regulatory Standards. Asking for standards and a degree of care in AI is common across many industries, contrasting with the fewer regulations on AI systems that could pose catastrophic risks.

insight • 3 months ago • Via Gary Marcus on AI •

Clarifications Requested. Concerns about inaccuracies in the essay lead to a request for reconsideration of the stance on SB-1047.

insight • 3 months ago • Via Gary Marcus on AI •

Need for Concrete Suggestions. While favoring AI governance, there are no positive, concrete suggestions offered for addressing risks such as mass casualties or large-scale cyberattacks.

insight • 3 months ago • Via Gary Marcus on AI •

Concerns on SB-1047. SB-1047 does not require predicting every use of an AI model, but focuses on specific, serious 'critical harms' such as mass casualties and large-scale cyberattacks.

insight • 3 months ago • Via Gary Marcus on AI •

Impact on Little Tech. Many of the bill's requirements are limited to models with training runs costing $100 million or more, so they do not predominantly impact 'little tech'.

insight • 3 months ago • Via Gary Marcus on AI •

Kill Switch Misunderstanding. The 'kill switch' requirement doesn't apply to open-source models once they are out of the original developer's control.

insight • 3 months ago • Via Gary Marcus on AI •

Chunking Strategy. Sentence-level chunking with a size of 512 tokens, using techniques like 'small-to-big' and 'sliding window', provides a good balance between information preservation and processing efficiency.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •
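
The sliding-window part of this recommendation can be sketched as below, with a whitespace word count standing in for a real 512-token tokenizer budget (function and parameter names are illustrative):

```python
# Sentence-level chunking with a sliding window: when the window fills,
# emit it as a chunk and carry the last sentence(s) into the next chunk
# so adjacent chunks overlap.
def chunk_sentences(sentences, max_tokens=512, overlap=1):
    chunks, window = [], []
    for sent in sentences:
        window.append(sent)
        if sum(len(s.split()) for s in window) >= max_tokens:
            chunks.append(" ".join(window))
            window = window[-overlap:]  # carry overlap into the next chunk
    if window:
        chunks.append(" ".join(window))
    return chunks

# Tiny budget so the overlap is visible:
chunks = chunk_sentences(["a b c", "d e", "f g h"], max_tokens=4, overlap=1)
# → ['a b c d e', 'd e f g h', 'f g h']; adjacent chunks share a sentence
```

The overlap is what preserves information at chunk boundaries: a fact split across two sentences still appears intact in at least one chunk.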

BERT Accuracy. A BERT-based classifier achieved high accuracy (over 95%) in determining retrieval needs.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

Query Classification. Decides if retrieval is needed for a given query, helping keep costs down.

data point • 3 months ago • Via Artificial Intelligence Made Simple •
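
The routing logic can be sketched as below; a trivial keyword heuristic stands in here for the trained BERT-based classifier the findings above describe:

```python
# Route queries: only run the (expensive) retrieval step when a classifier
# decides the query actually needs external knowledge.
def needs_retrieval(query: str) -> bool:
    # Stand-in heuristic, NOT the real classifier: factual cues trigger
    # retrieval, while rewriting/reasoning prompts skip it.
    factual_cues = ("who ", "what ", "when ", "where ", "according to")
    q = query.lower()
    return any(cue in q for cue in factual_cues)

def answer(query, retrieve, generate):
    context = retrieve(query) if needs_retrieval(query) else ""
    return generate(query, context)

# Usage: a chit-chat style query skips retrieval entirely,
# so the generator receives an empty context.
skipped = answer("Rewrite this politely.",
                 lambda q: "retrieved docs...",
                 lambda q, c: c)
```

Skipping retrieval for queries that do not need it is where the reported latency and cost reductions come from.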

RAG Advantages. RAG speeds this up by having the AI find relevant contexts and aggregate them.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

RAG Definition. Retrieval Augmented Generation involves using AI to search a pre-defined knowledge base to answer user queries.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

Cost Considerations. While modern RAG setups (especially generator-heavy ones) are more expensive than V0, the general principle is still useful to keep in mind.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

RAG System Recipes. The authors propose two distinct recipes for implementing RAG systems.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Integration Benefits. Query Classification Module leads to an average improvement in overall score from 0.428 to 0.443 and a reduction in latency time from 16.41 to 11.58 seconds per query.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

RAG vs Fine-Tuning. RAG outperforms fine-tuning with respect to injecting new sources of information into an LLM's responses.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Fine-Tuning Focus. It’s best to inject new learning/information mainly at the data-indexing stage rather than through fine-tuning.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

Retrieval Methods Findings. The authors recommend monoT5 as a comprehensive method balancing performance and efficiency.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

Hybrid Retrieval Success. Hybrid search, combining sparse and dense retrieval with HyDE, achieves the best retrieval performance.

insight • 3 months ago • Via Artificial Intelligence Made Simple •
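
A sketch of the score-fusion step in such a hybrid setup (min-max normalization plus a weighted sum; the HyDE part would change how the dense query vector is built, not this fusion):

```python
def fuse_scores(sparse, dense, alpha=0.5):
    """Weighted fusion of sparse (e.g. BM25) and dense retrieval scores.

    `sparse` and `dense` map doc ids to raw scores; each score set is
    min-max normalized before mixing so the two scales are comparable.
    Returns doc ids sorted best-first.
    """
    def norm(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {d: (s - lo) / span for d, s in scores.items()}
    s, d = norm(sparse), norm(dense)
    docs = set(s) | set(d)
    return sorted(docs, key=lambda doc: -(alpha * s.get(doc, 0.0)
                                          + (1 - alpha) * d.get(doc, 0.0)))
```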

Airbnb Architecture Shift. In 2018, Airbnb began its migration to a service-oriented architecture due to challenges with maintaining their Ruby on Rails 'monorail'.

insight • 3 months ago • Via Artificial Intelligence Made Simple • www.infoq.com

Vocab Size Research. Research indicates that larger models deserve larger vocabularies, and increasing vocabulary size consistently improves downstream performance.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

Confabulation Perspective. Hallucinations in large language models can be considered a potential resource instead of a categorically negative pitfall.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

GitHub CI/CD Insights. GitHub runs 15,000 CI jobs within an hour across 150,000 cores of compute.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

Machine Learning Applications. Software engineers building applications using machine learning need to test models in real-world scenarios before choosing the best performing model.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

RAG vs. LLMs. When resourced sufficiently, long-context LLMs consistently outperform Retrieval Augmented Generation in terms of average performance.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

LLM Paper Notes. Jean David Ruvini posts his notes on LLM/NLP related papers every month, providing valuable insights.

insight • 3 months ago • Via Artificial Intelligence Made Simple • www.linkedin.com

Emergent Garden. Emergent Garden puts out very interesting videos on Life simulations, neural networks, cellular automata, and other emergent programs.

insight • 3 months ago • Via Artificial Intelligence Made Simple • www.youtube.com

Community Engagement. Devansh encourages individuals doing interesting work to drop their introduction in the comments for potential spotlight features.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

Reading Recommendations. Devansh plans to share AI Papers/Publications, interesting books, videos, etc., each week.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Supporting Independent Work. Devansh puts a lot of effort into creating work that is informative, useful, and independent from undue influence.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Content Focus. The focus will be on AI and Tech, but ideas might range from business, philosophy, ethics, and much more.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Meta's AI Studio Launch. Meta has launched a new tool called AI Studio, allowing users in the US to create AI versions of themselves on Instagram or the web.

data point • 3 months ago • Via Last Week in AI • www.theverge.com

Autonomous Driving Milestone. Stanford Engineering and Toyota Research Institute achieve a milestone in autonomous driving by creating the world’s first autonomous Tandem Drift team, using AI to direct two driverless cars to perform synchronized maneuvers.

data point • 3 months ago • Via Last Week in AI • engineering.stanford.edu

Concerns Over AI Alteration. Elon Musk shares a deepfake video of Kamala Harris, potentially violating the platform's policies against synthetic and manipulated media, sparking concerns about AI-altered content in the upcoming election.

insight • 3 months ago • Via Last Week in AI • www.theverge.com

AI Law in Europe. Europe enforces the world's first AI law, targeting US tech giants with regulations on AI development, deployment, and use.

data point • 3 months ago • Via Last Week in AI • www.vcpost.com

Perplexity AI's Revenue Share. Perplexity AI plans to share advertising revenue with news publishers whose content is used by the bot, responding to accusations of plagiarism and unethical web scraping.

insight • 3 months ago • Via Last Week in AI •

Funding for Black Forest Labs. Black Forest Labs, a startup founded by the creators of Stable Diffusion, has launched FLUX.1, a new text-to-image model suite for the open-source artificial intelligence community and secured $31 million in seed funding.

data point • 3 months ago • Via Last Week in AI • venturebeat.com

Musk's Revived Lawsuit. Elon Musk has reinitiated a lawsuit against OpenAI, the creator of the AI chatbot ChatGPT, reigniting a longstanding dispute that originated from a power conflict within the San Francisco-based startup.

data point • 3 months ago • Via Last Week in AI • www.nytimes.com

Focus on AI Alignment. Schulman, who played a key role in creating the AI-powered chatbot platform ChatGPT and led OpenAI's alignment science efforts, stated his move was driven by a desire to focus more on AI alignment and hands-on technical work.

insight • 3 months ago • Via Last Week in AI •

OpenAI Departures. OpenAI co-founder John Schulman has left the company to join rival AI startup Anthropic, while OpenAI president and co-founder Greg Brockman is taking an extended leave until the end of the year.

data point • 3 months ago • Via Last Week in AI • techcrunch.com

Monetization Intent. Altman wants to know - and monetize - everything about you.

insight • 3 months ago • Via Gary Marcus on AI •

Investment in Hardware. OpenAI just put money into a $60M fundraise for a webcam company and is planning a hardware joint venture with them.

data point • 3 months ago • Via Gary Marcus on AI • www.theinformation.com

Security Expertise. OpenAI recently put Paul Nakasone (ex NSA) on the board.

data point • 3 months ago • Via Gary Marcus on AI •

WorldCoin Connection. Sam founded WorldCoin, known for their eye-scanning orb.

data point • 3 months ago • Via Gary Marcus on AI •

Data Collection Scale. ChatGPT has gathered unprecedented amounts of personal data.

data point • 3 months ago • Via Gary Marcus on AI •

Personal Data Training. Sam Altman has acknowledged wanting to train on everyone's personal documents (Word files, email etc).

data point • 3 months ago • Via Gary Marcus on AI •

Key Staff Departures. Over the last several months they have lost Ilya Sutskever, a whole bunch of safety people, and (slightly earlier) Andrej Karpathy.

data point • 3 months ago • Via Gary Marcus on AI •

Continuous Monitoring. Gary Marcus has had his eye on OpenAI for a long time.

recommendation • 3 months ago • Via Gary Marcus on AI •

Image Link. OpenAI's challenges appear visually notable.

data point • 3 months ago • Via Gary Marcus on AI • substackcdn.com

Future Prospects Doubted. Prospects don’t seem as strong as they once did.

insight • 3 months ago • Via Gary Marcus on AI •

Valuation Concerns. Will they earn enough to justify their $80B valuation?

insight • 3 months ago • Via Gary Marcus on AI •

Risk of WeWork Comparison. I said it before, and I will say it again: OpenAI could wind up being seen as the WeWork of AI.

insight • 3 months ago • Via Gary Marcus on AI •

Morale Issues Identified. The board, which basically said it couldn't trust Sam, may have had a point.

insight • 3 months ago • Via Gary Marcus on AI •

Election Misinformation. Five states suggested that Musk's AI chatbot has spread election misinformation.

insight • 3 months ago • Via Gary Marcus on AI • www.axios.com

Elon Musk's Lawsuit. Elon sued OpenAI again; the most interesting thing is that the suit could force a discussion of what AGI means – in court.

insight • 3 months ago • Via Gary Marcus on AI • www.nytimes.com

AGI Predictions. OpenAI tempered expectations for its next event, and said we wouldn't see GPT-5 then.

insight • 3 months ago • Via Gary Marcus on AI • garymarcus.substack.com

Nvidia Stock Decline. Nvidia dropped 6%, and is down 20% over the last month.

data point • 3 months ago • Via Gary Marcus on AI • garymarcus.substack.com

Market Uncertainty. It is also not out of the question that today could someday be seen as a turning point.

insight • 3 months ago • Via Gary Marcus on AI •

Google Antitrust Case. Google lost its antitrust case; it could have implications for Google's storehouse of AI training data.

insight • 3 months ago • Via Gary Marcus on AI • x.com

Cohere's Funding. AI startup Cohere raises US$500-million, valuing company at US$5.5-billion.

data point • 3 months ago • Via Last Week in AI • www.theglobeandmail.com

Meta's New AI Model. Meta releases open-source AI model it says rivals OpenAI, Google tech.

data point • 3 months ago • Via Last Week in AI • www.washingtonpost.com

OpenAI's SearchGPT. OpenAI announces SearchGPT, its AI-powered search engine.

data point • 3 months ago • Via Last Week in AI • www.theverge.com

Google's Gemini Model. Google gives free Gemini users access to its faster, lighter 1.5 Flash AI model.

data point • 3 months ago • Via Last Week in AI • www.engadget.com

Strike Over AI. Video game performers will go on strike over artificial intelligence concerns.

data point • 3 months ago • Via Last Week in AI • apnews.com

Legislative Actions. Democratic senators seek to reverse Supreme Court ruling that restricts federal agency power.

insight • 3 months ago • Via Last Week in AI • www.nbcnews.com

Impact of AI on Jobs. As new tech threatens jobs, Silicon Valley promotes no-strings cash aid.

insight • 3 months ago • Via Last Week in AI • www.npr.org

AI Safety Concerns. Senators demand OpenAI detail efforts to make its AI safe.

insight • 3 months ago • Via Last Week in AI • www.washingtonpost.com

AI in Mathematics. AI achieves silver-medal standard solving International Mathematical Olympiad problems.

data point • 3 months ago • Via Last Week in AI • deepmind.google

Historical Predictions. In December 2022, at the height of ChatGPT's popularity I made a series of seven predictions about GPT-4 and its limits, such as hallucinations and making stupid errors, in an essay called What to Expect When You Are Expecting GPT-4.

data point • 3 months ago • Via Gary Marcus on AI • open.substack.com

Strict Disbelief. I've always thought GenAI was overrated.

insight • 3 months ago • Via Gary Marcus on AI •

Consistent Predictions. In March of this year, I made a series of seven predictions about how this year would go. Every one of them has held firm, for every model produced by every developer ever since.

data point • 3 months ago • Via Gary Marcus on AI •

Warning About AI. Almost exactly a year ago, in August 2023, I was (AFAIK) the first person to warn that Generative AI could be a dud.

data point • 3 months ago • Via Gary Marcus on AI • garymarcus.substack.com

Investor Enthusiasm Diminishing. Investors may well stop forking out money at the rates they have, enthusiasm may diminish, and a lot of people may lose their shirts.

insight • 3 months ago • Via Gary Marcus on AI •

Generative AI Limitations. There is just one thing: Generative AI, at least as we know it now, doesn't actually work that well, and maybe never will.

insight • 3 months ago • Via Gary Marcus on AI •

Imminent Collapse. The collapse of the generative AI bubble – in a financial sense – appears imminent, likely before the end of the calendar year.

insight • 3 months ago • Via Gary Marcus on AI •

AI Bubble Prediction. I just wrote a hard-hitting essay for WIRED predicting that the AI bubble will collapse in 2025 — and now I wish I hadn't.

insight • 3 months ago • Via Gary Marcus on AI •

AGI Misconceptions. Realizing neural networks struggle with outliers makes AGI seem like sheer fantasy, as no general solution to the outlier problem exists yet.

insight • 3 months ago • Via Gary Marcus on AI •

Symbolic vs Neural Networks. Symbolic systems have always been good for outliers; neural networks have always struggled with them.

insight • 3 months ago • Via Gary Marcus on AI •

Generative AI Expectations. GenAI sucks at outliers; if things are far enough from the space of trained examples, the techniques will fail.

insight • 3 months ago • Via Gary Marcus on AI •

AI Industry Bubble. An entire industry has been built - and will collapse - because people aren’t getting it regarding the outlier problem.

insight • 3 months ago • Via Gary Marcus on AI •

Cognitive Sciences Respect. AI researchers should have more respect for the cognitive sciences to make better advancements.

recommendation • 3 months ago • Via Gary Marcus on AI •

Historical Context. Machine learning had trouble with outliers in the 1990s, and it still does.

data point • 3 months ago • Via Gary Marcus on AI •

Outlier Problem Noted. Handling outliers is still the Achilles’ Heel of neural networks; this has been a constant issue for over a quarter century.

data point • 3 months ago • Via Gary Marcus on AI •

Median Split Insight. The key dividing line on the SAT math lies between those who understand fractions, and those who do not.

insight • 3 months ago • Via Gary Marcus on AI •

Machine Learning Limitations. Current approaches to machine learning are lousy at outliers, which means they often say and do things that are absurd when encountering unusual circumstances.

insight • 3 months ago • Via Gary Marcus on AI •

Burnout Society Overview. Byung-Chul Han describes how modern society primes us for burnout, reflecting on individual experiences in this context.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Limitations of Achievement. The achievement society leads to a distorted view of life, reducing relationships and experiences to mere metrics of success.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Engagement with Philosophy. The article recommends exploring philosophical perspectives like those of Nietzsche and Kierkegaard alongside Han's analysis for a broader understanding of the issues at hand.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

Cultural Critique. While some critiques of Han's work resonate, there are also suggestions that engaging with craftsmanship can bring joy, countering the narrative of constant productivity.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Effects of Boredom. Han highlights that deep boredom can lead to mental relaxation, contrasting with the hectic pace of contemporary life.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Importance of Idleness. Han emphasizes the need for idle work, where tasks are done without worrying about results, to regain the right to be 'Human Beings' instead of 'Human Doings'.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Self-Destructive Pressure. The achievement-subject experiences destructive self-reproach and auto-aggression, resulting in a mental war against themselves.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Internalized Taskmaster. The internalized taskmaster becomes more insidious than any external authority, driving individuals to constantly strive for more.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Impact of Positivity. In the achievement society, positivity becomes a dominant force, pushing individuals to be happier and more successful, leading to internalized pressure.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Achievement Society Dynamics. Society has transitioned from a Discipline-based model to an Achievement-based one, driven by internal pressures to succeed.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Survey Findings. The Upwork survey highlighted during the week reflects shifting sentiments around Generative AI.

data point • 3 months ago • Via Gary Marcus on AI • www.upwork.com

Warning on Deep Learning. Gary Marcus has been warning that deep learning was oversold since November 2012. Looks like he was right.

insight • 3 months ago • Via Gary Marcus on AI • www.newyorker.com

Opportunity for Resources. The fact that the GenAI bubble is apparently bursting sooner than expected may soon free up resources for other approaches, e.g., into neurosymbolic AI.

insight • 3 months ago • Via Gary Marcus on AI • garymarcus.substack.com

Loss of Faith. The bubble has begun to burst. Users have lost faith, clients have lost faith, VC's have lost faith.

insight • 3 months ago • Via Gary Marcus on AI •

Canceled Deal Reported. Business Insider reported a canceled deal, exacerbating concerns for the sector.

insight • 3 months ago • Via Gary Marcus on AI • stocks.apple.com

Investor Concerns. Microsoft's Chief Financial Officer painted a picture of a much slower burn, alarming some investors.

insight • 3 months ago • Via Gary Marcus on AI •

GenAI Project Canceled. Another GenAI monetization scheme bites the dust.

insight • 3 months ago • Via Gary Marcus on AI •

Generative AI Decline. Generative AI might be a dud; I just didn't expect it to fade so fast.

insight • 3 months ago • Via Gary Marcus on AI • garymarcus.substack.com

Legal Complications. Deepfakes challenge the reliability of digital evidence in court, potentially slowing legal processes.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Education and Empowerment. The best regulation will, therefore, focus on equipping us with the skills needed to navigate this.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

Age of Misinformation. We fail with deepfakes because we fail with social media, resorting in both cases to the ineffective responses of censorship and an abdication of personal responsibility.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Combatting Environmental Concerns. Investing in more energy-efficient hardware and software for deepfake creation can significantly reduce energy consumption and emissions.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

Environmental Impact. The energy-intensive process of generating deepfakes will contribute to climate change.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Scams and Vulnerability. Deepfakes provide a new tool for scammers, especially in targeting emotionally vulnerable people.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Labeling AI Content. I believe that heavily AI-generated content should be labeled, and people featured in AI Ads must have given explicit approval for their appearance.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

Exploitation of Public Figures. Non-consensual use of deepfakes can dilute personal brands and harm fan relationships.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Political Misinformation. The real danger lies in the lack of media literacy and critical thinking skills, exacerbated by political polarization.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Combat Information Overload. The best way to combat the information overload created by Deepfakes is to empower people to stand on their own, interact with the world, and take care of themselves.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

Need for Educational Reform. The way we see education needs a rework: the emphasis on courses, books, and degrees creates learners who are too static and passive.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

Cognitive Overload. The most immediate and pervasive impact of deepfakes would be the cognitive overload and information fatigue they create.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Deepfake Risks Discussion. The discussions around the risks from Deepfakes are incomplete (or wrong) since they overexaggerate some risks while ignoring others.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

FTC AI Investigation. FTC investigates how companies use AI to implement surveillance pricing based on consumer behavior and personal data, seeking information from eight major companies.

insight • 3 months ago • Via Last Week in AI • techcrunch.com

AI Scraping Backlash. AI companies are facing a growing backlash from website owners who are blocking their scraper bots, leading to concerns about the availability of data for AI training.

insight • 3 months ago • Via Last Week in AI • www.404media.co

Regulatory Pressure. Elon Musk's X platform is under pressure from data regulators after it emerged that users are consenting to their posts being used to build artificial intelligence systems via a default setting on the app.

insight • 3 months ago • Via Last Week in AI • amp-theguardian-com.cdn.ampproject.org

OpenAI Bankruptcy Risk. OpenAI faces potential bankruptcy with projected $5 billion losses due to high operational costs and insufficient revenue from its AI ventures.

insight • 3 months ago • Via Last Week in AI • www.windowscentral.com

AI Funding Surge. AI startups have raised $41.5 billion worldwide in the past five years, surpassing other industries and indicating a significant role for AI in the future development and modernization of various sectors.

data point • 3 months ago • Via Last Week in AI • www.trendingtopics.eu

Adobe Generative AI. Adobe introduces new generative AI features to Illustrator and Photoshop, including tools like Generative Shape Fill and Text to Pattern in Illustrator.

data point • 3 months ago • Via Last Week in AI • www.theverge.com

YouTube Search Deal. Google has become the exclusive search engine capable of surfacing results from Reddit, one of the internet's most significant sources of user-generated content.

data point • 3 months ago • Via Last Week in AI • www.404media.co

SearchGPT Launch. OpenAI has announced its entry into the search market with SearchGPT, an AI-powered search engine that organizes and makes sense of search results rather than just providing a list of links.

data point • 3 months ago • Via Last Week in AI • www.theverge.com

Mistral Large 2. Mistral AI has launched Mistral Large 2, a new generation of its flagship model, boasting 123 billion parameters and a 128k context window.

data point • 3 months ago • Via Last Week in AI • analyticsindiamag.com

Study Reference. Read Bjarnason's new essay here.

recommendation • 3 months ago • Via Gary Marcus on AI • www.baldurbjarnason.com

Organizational Expectations. Management's expectation that AI is a magic fix for the organizational catastrophe that is the mass layoff fad is often unfounded.

insight • 3 months ago • Via Gary Marcus on AI •

General Public Sentiment. Many coders and tech aficionados may love ChatGPT for work, but much of the outside world feels quite differently.

insight • 3 months ago • Via Gary Marcus on AI •

Unusual Study Results. It's quite unusual for a study like this on a new office tool to return such a resoundingly negative sentiment.

insight • 3 months ago • Via Gary Marcus on AI •

Negative AI Impact. Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way.

data point • 3 months ago • Via Gary Marcus on AI •

Productivity Concerns. Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect.

data point • 3 months ago • Via Gary Marcus on AI •

Generative AI Bubble. I fully expect that the generative AI bubble will begin to burst within the next 12 months, for many reasons.

insight • 3 months ago • Via Gary Marcus on AI •

Neurosymbolic AI Potential. AlphaProof and AlphaGeometry are both along the lines of the first approach that we discussed, using formal systems like Cyc to vet solutions produced by LLMs.

insight • 3 months ago • Via Gary Marcus on AI •

Limitations of Generative AI. The biggest intrinsic failings of generative AI have to do with reliability, in a way that I believe can never be solved, given their inherent nature.

insight • 3 months ago • Via Gary Marcus on AI •

Progress by Google DeepMind. To do this GDM used not one but two separate systems, a new one called AlphaProof, focused on theorem proving, and an update (AlphaGeometry 2) to an older one focused on geometry.

data point • 3 months ago • Via Gary Marcus on AI • deepmind.google

Confidence in AI. On balance, these systems simply cannot be counted on, which is a big part of why Fortune 500 companies have lost confidence in LLMs, after the initial hype.

data point • 3 months ago • Via Gary Marcus on AI •

Frustration with LLMs. My strong intuition... is that LLMs are simply never going to work reliably, at least not in the general form that so many people last year seemed to be hoping.

insight • 3 months ago • Via Gary Marcus on AI •

Need for Hybrid Models. What I have advocated for, my entire career, is hybrid approaches, sometimes called neurosymbolic AI, because they combine the best of the currently popular neural network approach with the symbolic approach.

recommendation • 3 months ago • Via Gary Marcus on AI •

Policy Issues. The U.S. is considering 'draconian' sanctions against China's semiconductor industry.

insight • 3 months ago • Via Last Week in AI • www.tomshardware.com

AI Video Model. Haiper 1.5 is a new AI video generation model challenging Sora and Runway.

data point • 3 months ago • Via Last Week in AI • venturebeat.com

Open Source Advancements. Mistral releases Codestral Mamba for faster, longer code generation.

data point • 3 months ago • Via Last Week in AI • venturebeat.com

GPT-4o Mini Release. OpenAI's release of GPT-4o Mini is a small AI model powering ChatGPT.

data point • 3 months ago • Via Last Week in AI • techcrunch.com

Internal Controversies. Whistleblowers say OpenAI illegally barred staff from airing safety risks.

insight • 3 months ago • Via Last Week in AI • www.washingtonpost.com

Elon Musk's Supercomputer. Elon Musk is working on a giant xAI supercomputer in Memphis.

data point • 3 months ago • Via Last Week in AI • www.forbes.com

Nvidia's Value. In the weeks leading up to Nvidia becoming the most valuable company in the world, I’ve received numerous requests for the updated math behind my analysis.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

LLM Evaluation Technique. We explore the use of state-of-the-art LLMs, such as GPT-4, as a surrogate for humans.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Human Evaluation Challenges. While human evaluation is the gold standard for assessing human preferences, it is exceptionally slow and costly.

insight • 3 months ago • Via Artificial Intelligence Made Simple •
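
A sketch of the LLM-as-judge pattern these two cards describe. `call_llm` is an assumed prompt→completion callable (e.g. a thin wrapper around a GPT-4 API client), not a specific SDK method; swapping the answer order and requiring agreement is a common mitigation for position bias:

```python
JUDGE_PROMPT = """You are an impartial judge. Pick the better answer.
Question: {question}
Answer A: {a}
Answer B: {b}
Reply with exactly "A" or "B"."""

def judge(call_llm, question, a, b):
    """Pairwise LLM-as-judge: ask twice with the answers swapped and only
    trust the verdict when both orderings agree; otherwise call it a tie."""
    first = call_llm(JUDGE_PROMPT.format(question=question, a=a, b=b)).strip()
    second = call_llm(JUDGE_PROMPT.format(question=question, a=b, b=a)).strip()
    if first == "A" and second == "B":
        return "A"
    if first == "B" and second == "A":
        return "B"
    return "tie"
```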

Future Articles. Deepfake Part 3. Exploring the true dangers of AI-generated misinformation.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Active Subreddits. We started an AI Made Simple Subreddit. Come join us over here.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple • www.reddit.com

AI's Investment Issues. Turns out a lot of the massive GPU purchase agreements and data center acquisitions were misguided; investing without a clear long-term vision or an understanding of revenue has led to no ROI.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Philosophy of Love. Dostoevsky's ideas about love are hopeful, optimistic, demanding, and terrifying.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Importance of Stakeholder Alignment. The impact of getting stakeholder communication right vs wrong can be immense.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Dhabawala Case Study. Mumbai’s Dhabawala service presents an interesting case study of what is required to make food delivery profitable.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Community Engagement. If you’re doing interesting work and would like to be featured in the spotlight section, just drop your introduction in the comments/by reaching out to me.

recommendation • 3 months ago • Via Artificial Intelligence Made Simple •

AI Health Uncut. Sergei Polevikov publishes super insightful and informative reports on AI, Healthcare, and Medicine as a business.

insight • 3 months ago • Via Artificial Intelligence Made Simple • sergeiai.substack.com

Reading Recommendations. I figured I’d start sharing whatever AI Papers/Publications, interesting books, videos, etc I came across each week.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Newsletter Reach. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 3 months ago • Via Artificial Intelligence Made Simple •

Semiconductor Industry Insights. The semiconductor capital equipment (semicap) industry is one of the most important industries on the planet and one that doesn’t get much love.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Research Focus. The goal is to share interesting content with y'all so that you can get a peek behind the scenes into my research process.

insight • 3 months ago • Via Artificial Intelligence Made Simple •

Potential Fatal Questions. All of these questions are hard, with no obvious answer; the last may be fatal.

insight • 3 months ago • Via Gary Marcus on AI •

Accurate Predictions. Gary Marcus’s predictions over the last couple years have been astonishingly on target.

insight • 3 months ago • Via Gary Marcus on AI •

Investor Questions. But investors really ought to ask some tough questions, such as these: What is their moat?

recommendation • 3 months ago • Via Gary Marcus on AI •

Cash Raising Necessity. Obviously, their only hope is to raise more cash, and they will certainly try.

insight • 3 months ago • Via Gary Marcus on AI •

LLMs as Commodities. LLMs have just become exactly the commodity I predicted they would become, at the lowest possible price.

insight • 3 months ago • Via Gary Marcus on AI •

MetaAI Competition. Yesterday was something even more dramatic: MetaAI all but pulled the rug out from under OpenAI's business, offering a viable competitor to GPT-4 for free.

insight • 3 months ago • Via Gary Marcus on AI •

Lack of Competitive Moat. OpenAI, as far as I can tell, doesn’t really have any moat whatsoever, beyond brand recognition.

insight • 3 months ago • Via Gary Marcus on AI •

Profit Predictions. That’s not great news for OpenAI, and you can see why they haven’t been, um, Open, about their financials.

insight • 3 months ago • Via Gary Marcus on AI •

OpenAI's Financial Issues. I have long suspected that OpenAI was losing money, and lots of it, but never seen an analysis, until this morning.

insight • 3 months ago • Via Gary Marcus on AI • www.theinformation.com

AI Training Data Ethics. A massive dataset containing subtitles from over 170,000 YouTube videos was used to train AI systems for major tech companies without permission, raising significant ethical and legal questions.

insight • 3 months ago • Via Last Week in AI • www.proofnews.org

Hugging Face SmoLLM. Hugging Face has introduced SmoLLM, a new series of compact language models available in three sizes: 130M, 350M, and 1.7B parameters.

data point • 3 months ago • Via Last Week in AI • analyticsindiamag.com

Llama 3.1 Parameters. With 405 billion parameters, Llama 3.1 was developed using over 16,000 Nvidia H100 GPUs, costing Meta hundreds of millions of dollars.

data point • 3 months ago • Via Last Week in AI •

Meta Llama 3.1 Release. Meta has released Llama 3.1, the largest open-source AI model, claiming it outperforms top private models like GPT-4o and Claude 3.5 Sonnet.

data point • 3 months ago • Via Last Week in AI • www.theverge.com

AI Security Standards. Top tech companies form a coalition to develop cybersecurity and safety standards for AI, aiming to ensure rigorous security practices and keep malicious hackers at bay.

recommendation • 3 months ago • Via Last Week in AI • www.axios.com

GPT-4o Mini Launch. OpenAI has launched GPT-4o mini, a smaller, faster, and more cost-effective AI model than its predecessors.

data point • 3 months ago • Via Last Week in AI • techcrunch.com

Market Demand for Small Models. The trend toward small language models is accelerating as Arcee AI announced its $24M Series A funding only 6 months after a $5.5M seed round in January 2024.

insight • 3 months ago • Via Last Week in AI • venturebeat.com

OpenAI Reasoning Project. OpenAI is developing a new reasoning technology called Project Strawberry, which aims to enable AI models to conduct autonomous research and improve their ability to answer difficult user queries.

data point • 3 months ago • Via Last Week in AI • techreport.com

GPT-4o Mini Performance. GPT-4o mini scored 82% on the MMLU reasoning benchmark and 87% on the MGSM math reasoning benchmark, outperforming other models like Gemini 1.5 Flash and Claude 3 Haiku.

data point • 3 months ago • Via Last Week in AI •

Data Augmentation Strategy. We will use a policy like TrivialAugment + StyleTransfer for its superior performance and cost-benefit tradeoff.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •
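To make the TrivialAugment idea concrete: it applies exactly one augmentation, chosen uniformly at random, at a strength also chosen uniformly at random, per sample. A minimal Python sketch follows; the two image operations are hypothetical stand-ins operating on nested pixel lists, not the real torchvision implementation.

```python
import random

# TrivialAugment policy sketch: one random op at one random strength per sample.
# The ops below are illustrative stand-ins for real image augmentations.

def adjust_brightness(img, m):
    # scale pixel values up by factor (1 + m), clamped to 255
    return [[min(255, int(p * (1 + m))) for p in row] for row in img]

def flip_horizontal(img, m):
    # strength is ignored for flips
    return [row[::-1] for row in img]

AUGMENTATIONS = [adjust_brightness, flip_horizontal]

def trivial_augment(img):
    op = random.choice(AUGMENTATIONS)     # one op, chosen uniformly
    magnitude = random.uniform(0.0, 1.0)  # one strength, chosen uniformly
    return op(img, magnitude)
```

In practice one would reach for a library implementation (e.g. a TrivialAugment transform in an augmentation toolkit) and compose it with a style-transfer step; the sketch only shows the selection logic.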

Self-Supervised Learning Application. Self-supervised clustering is elite for selecting the right samples to train on, helping to overcome scaling limits.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •
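The point of clustering-based selection is to spread training picks across the embedding space rather than sampling redundant near-duplicates. As a rough stand-in for full self-supervised clustering, here is a greedy farthest-point selection sketch over precomputed embeddings; the embeddings and the approach are illustrative assumptions, not the newsletter's exact pipeline.

```python
def sqdist(a, b):
    # squared Euclidean distance between two embedding vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def diverse_sample(embeddings, k):
    # Greedy farthest-point selection: each pick maximizes its distance
    # to everything already chosen, spreading samples across the space.
    chosen = [0]  # start from the first embedding
    while len(chosen) < k:
        best = max((i for i in range(len(embeddings)) if i not in chosen),
                   key=lambda i: min(sqdist(embeddings[i], embeddings[j])
                                     for j in chosen))
        chosen.append(best)
    return chosen

# Two tight clusters: the picks land one per cluster, not two near-duplicates.
embs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [4.9, 5.0]]
print(diverse_sample(embs, 2))  # → [0, 2]
```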

Audience Engagement Strategy. Every share puts me in front of a new audience, and I rely entirely on word-of-mouth endorsements to grow.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Model Performance Improvement. Our method uses a deep convolutional network trained to directly optimize the embedding itself, achieving state-of-the-art face recognition performance using only 128 bytes per face.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
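Directly optimizing the embedding is typically done with a triplet loss: pull same-identity embeddings together and push different-identity ones apart by at least a margin. A minimal sketch under that assumption (plain Python lists standing in for embedding vectors):

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    # Triplet loss: d(anchor, positive) should be smaller than
    # d(anchor, negative) by at least `margin`; otherwise we pay the gap.
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return max(0.0, sqdist(anchor, positive) - sqdist(anchor, negative) + margin)

# Already well-separated triplet: zero loss.
print(triplet_loss([0.0, 0.0], [0.1, 0.0], [1.0, 0.0]))  # → 0.0
```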

Sample Selection for Retraining. It’s best to add training samples based on maximizing information gain instead of simply adding more random ones.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •
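A common proxy for information gain is prediction entropy: samples where the model's class distribution is flattest teach it the most. The sketch below is one standard uncertainty-sampling recipe, not necessarily the exact criterion the article uses.

```python
import math

def entropy(probs):
    # Shannon entropy of a class-probability distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_informative(predictions, k):
    # Rank unlabeled samples by entropy (highest model uncertainty first)
    # and return the indices of the top k.
    ranked = sorted(range(len(predictions)),
                    key=lambda i: entropy(predictions[i]), reverse=True)
    return ranked[:k]

preds = [[0.98, 0.01, 0.01],   # confident  -> low entropy
         [0.34, 0.33, 0.33],   # uncertain  -> high entropy
         [0.70, 0.20, 0.10]]   # in between
print(select_most_informative(preds, 2))  # → [1, 2]
```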

Temporal Feature Analysis. If you want to take things up a notch, you’re best served going for temporal feature extraction.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Importance of Ensemble Modeling. Using simple models keeps inference costs low and allows an ensemble to compensate for the weaknesses of any one model by sampling a more diverse search space.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
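The simplest form of this is probability averaging: run every cheap model and average their class distributions, so no single model's blind spot dominates. A minimal sketch, with hypothetical toy models standing in for real classifiers:

```python
def ensemble_predict(models, x):
    # Average class probabilities from several cheap models.
    outputs = [m(x) for m in models]
    n = len(outputs)
    return [sum(col) / n for col in zip(*outputs)]

# Hypothetical toy models returning [p_real, p_fake] scores:
m1 = lambda x: [0.9, 0.1]
m2 = lambda x: [0.4, 0.6]
m3 = lambda x: [0.8, 0.2]
print(ensemble_predict([m1, m2, m3], None))  # → roughly [0.7, 0.3]
```

Averaging is the least opinionated combiner; weighted voting or stacking are the usual next steps when the member models differ markedly in quality.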

Effective Feature Extraction. Feature extraction is the highest ROI decision you can make.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Record Accuracy Achieved. On the widely used Labeled Faces in the Wild (LFW) dataset, our system achieves a new record accuracy of 99.63%.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Deepfake Detection System. We hope to build a Deepfake Detection system that can classify inputs into three types: real, deepfake, and AI-generated.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Global AI Regulation. Japan's Prime Minister Fumio Kishida unveils an international framework for the regulation and use of generative AI, emphasizing the need to address the potential risks and promote cooperation for safe and trustworthy AI.

data point • 4 months ago • Via Last Week in AI • apnews.com

AI in Healthcare. AI system trained on heart's electrical activity reduces deaths in high-risk patients by 31% in hospital trial, proving its potential to save lives.

data point • 4 months ago • Via Last Week in AI • www.newscientist.com

AI Notetaking Revolution. 'I will never go back': Ontario family doctor says new AI notetaking saved her job.

data point • 4 months ago • Via Last Week in AI • globalnews.ca

Shift to Enterprise Focus. AI startups that initially garnered attention with innovative generative AI products are now shifting their focus towards enterprise customers to enhance revenue streams.

insight • 4 months ago • Via Last Week in AI •

Meta's Ad Tool Issues. Meta's automated ad tool, Advantage Plus, has been overspending on ad budgets and failing to deliver sales, causing frustration among marketers and businesses.

insight • 4 months ago • Via Last Week in AI • www.theverge.com

Microsoft's AI Policy Change. Microsoft bans U.S. police from using enterprise AI tool for facial recognition due to concerns about potential pitfalls and racial biases.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

Inverse Scaling Phenomenon. The authors also share their findings on the difficulty of creating and evaluating hard prompts, and the phenomenon of inverse scaling, where larger models fail tasks that smaller models can complete.

insight • 4 months ago • Via Last Week in AI •

Evaluation Challenges. The authors discuss the challenges of creating hard prompts and the trade-offs between human and model-based automatic evaluation.

insight • 4 months ago • Via Last Week in AI •

Vibe-Eval Suite. Reka AI introduces Vibe-Eval, a new evaluation suite designed to measure the progress of multimodal language models.

data point • 4 months ago • Via Last Week in AI • www.reka.ai

Burnout in AI Industry. AI engineers in the tech industry are experiencing burnout and rushed rollouts due to the intense competition and pressure to stay ahead in the generative AI race.

insight • 4 months ago • Via Last Week in AI • www.cnbc.com

Lawsuit Against OpenAI. Eight U.S. newspaper publishers, all under the ownership of investment firm Alden Global Capital, have filed a lawsuit against Microsoft and OpenAI, alleging copyright infringement.

data point • 4 months ago • Via Last Week in AI • www.cnbc.com

Deepfake Detector Release. OpenAI Releases 'Deepfake' Detector to Disinformation Researchers.

data point • 4 months ago • Via Last Week in AI • www.nytimes.com

AI Content Labeling. TikTok will automatically label AI-generated content created on platforms like DALL·E 3.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

AI Audiobooks. Audible's Test of AI-Voiced Audiobooks Tops 40,000 Titles.

data point • 4 months ago • Via Last Week in AI • www.bloomberg.com

AI Export Bill. US lawmakers unveil bill to make it easier to restrict exports of AI models.

data point • 4 months ago • Via Last Week in AI • www.reuters.com

OpenAI & Stack Overflow. OpenAI and Stack Overflow partner to bring more technical knowledge into ChatGPT.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Robotaxi Plans Delayed. Motional delays commercial robotaxi plans amid restructuring.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

Funding for Autonomy. Wayve, an A.I. Start-Up for Autonomous Driving, Raises $1 Billion.

data point • 4 months ago • Via Last Week in AI • www.nytimes.com

New AI Model. New Microsoft AI model may challenge GPT-4 and Google Gemini.

data point • 4 months ago • Via Last Week in AI • arstechnica.com

Siri Revamp. Apple Will Revamp Siri to Catch Up to Its Chatbot Competitors.

data point • 4 months ago • Via Last Week in AI • www.nytimes.com

Mystery Chatbot. Mysterious 'gpt2-chatbot' AI model appears suddenly, confuses experts.

data point • 4 months ago • Via Last Week in AI • arstechnica.com

AI Music Generation. ElevenLabs previews music-generating AI model.

data point • 4 months ago • Via Last Week in AI • venturebeat.com

Microsoft Copilot Upgrade. Microsoft is introducing new AI features in Copilot for Microsoft 365 to help users create better prompts and become prompt engineers, aiming to improve productivity and efficiency in the workplace.

recommendation • 4 months ago • Via Last Week in AI • www.theverge.com

TikTok AI Labeling. TikTok has announced that it will automatically label AI-generated content created on other platforms, such as OpenAI's DALL·E 3, using a technology called Content Credentials from the Coalition for Content Provenance and Authenticity (C2PA).

data point • 4 months ago • Via Last Week in AI • techcrunch.com

DeepSeek-V2 Features. DeepSeek AI releases DeepSeek-V2, a state-of-the-art, cost-effective, and efficient Mixture-of-Experts (MoE) language model with 236B total parameters, of which 21B are activated for each token.

data point • 4 months ago • Via Last Week in AI •
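The total-versus-activated parameter gap comes from top-k routing: each token is sent to only a few experts, so only their parameters run. A minimal scalar sketch of that routing (toy experts and scores are illustrative assumptions, not DeepSeek-V2's actual architecture):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of router scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_forward(token, experts, router_scores, top_k=2):
    # Route the token to the top_k highest-gated experts and combine
    # their outputs weighted by the renormalized gate values. Experts
    # outside the top_k never run -- that is the "activated" saving.
    gates = softmax(router_scores)
    top = sorted(range(len(experts)), key=lambda i: gates[i], reverse=True)[:top_k]
    norm = sum(gates[i] for i in top)
    return sum(gates[i] / norm * experts[i](token) for i in top)

# Hypothetical scalar "experts" for illustration:
experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
print(moe_forward(10.0, experts, [2.0, 1.0, -5.0], top_k=2))
```

With 236B total parameters split across many experts and only the top-k routed per token, the active compute corresponds to roughly 21B parameters, which is what makes such models cheap to serve relative to their size.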

Robot Dogs Testing. The United States Marine Forces Special Operations Command (MARSOC) is testing rifle-armed 'robot dogs' supplied by Onyx Industries.

data point • 4 months ago • Via Last Week in AI • www.twz.com

Advancements in Drug Discovery. AlphaFold 3 is expected to be particularly beneficial for drug discovery, as it can predict where a drug binds a protein, a feature that was absent in its predecessor, AlphaFold 2.

insight • 4 months ago • Via Last Week in AI •

AlphaFold 3 Overview. Google's DeepMind has unveiled AlphaFold 3, an advanced version of its protein structure prediction tool, which can now predict the structures of DNA, RNA, and essential drug discovery molecules like ligands.

data point • 4 months ago • Via Last Week in AI • www.technologyreview.com

AI Model Competition. Microsoft is developing a new large-scale AI language model called MAI-1, potentially rivaling state-of-the-art models from Google, Anthropic, and OpenAI.

insight • 4 months ago • Via Last Week in AI • arstechnica.com

AI Deepfake Detector. OpenAI releases a deepfake detector tool to combat the influence of AI-generated content on the upcoming elections, acknowledging that it's just the beginning of the fight against deepfakes.

recommendation • 4 months ago • Via Last Week in AI • www.nytimes.com

Wayve's $1 Billion Raise. Wayve, a London-based AI start-up for autonomous driving, raised an eye-popping $1 billion from investors like SoftBank, Microsoft, and Nvidia.

data point • 4 months ago • Via Last Week in AI • www.nytimes.com

AI and Deception. AI systems are becoming increasingly sophisticated in their capacity for deception, raising concerns about potential dangers to society and the need for AI safety laws.

insight • 4 months ago • Via Last Week in AI • www.theguardian.com

Safety Tool Release. U.K. Safety Institute releases an open-source toolset called Inspect to assess AI model safety, aiming to provide a shared, accessible approach to evaluations.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

Google Media Models. Google unveils Veo and Imagen 3, its latest AI media creation models.

insight • 4 months ago • Via Last Week in AI • www.engadget.com

AI in Search. Google is redesigning its search engine — and it's AI all the way down.

insight • 4 months ago • Via Last Week in AI • www.theverge.com

Google AI Astra. Project Astra is the future of AI at Google.

insight • 4 months ago • Via Last Week in AI • www.theverge.com

OpenAI GPT-4o. OpenAI releases GPT-4o, a faster model that's free for all ChatGPT users.

insight • 4 months ago • Via Last Week in AI • www.theverge.com

Listener Interaction. Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai.

recommendation • 4 months ago • Via Last Week in AI •

Special Interview. With a special one-time interview with Andrey in the latter part of the podcast.

data point • 4 months ago • Via Last Week in AI •

YouTube Version. You can watch the youtube version of this here:

data point • 4 months ago • Via Last Week in AI •

Guest Host. With guest host Daliana Liu from The Data Scientist Show!

data point • 4 months ago • Via Last Week in AI • www.linkedin.com

AI News Summary. Our 167th episode with a summary and discussion of last week's big AI news!

data point • 4 months ago • Via Last Week in AI •

Anthropic AI Tool. Anthropic AI Launches a Prompt Engineering Tool that Generates Production-Ready Prompts in the Anthropic Console.

insight • 4 months ago • Via Last Week in AI • www.marktechpost.com

AI Copyright Issues. How One Author Pushed the Limits of AI Copyright.

insight • 4 months ago • Via Last Week in AI • www.wired.com

AI Watermark. Google's invisible AI watermark will help identify generative text and video.

insight • 4 months ago • Via Last Week in AI • www.theverge.com

AI Model Safety. U.K. agency releases tools to test AI model safety.

insight • 4 months ago • Via Last Week in AI • techcrunch.com

New AI Models. Falcon 2: UAE's Technology Innovation Institute Releases New AI Model Series, Outperforming Meta's New Llama 3.

insight • 4 months ago • Via Last Week in AI • www.businesswire.com

Waymo Investigation. Waymo's robotaxis under investigation after crashes and traffic mishaps.

insight • 4 months ago • Via Last Week in AI • techcrunch.com

Zoox Probe. US agency probes Amazon-owned Zoox self-driving vehicles after two crashes.

insight • 4 months ago • Via Last Week in AI • www.reuters.com

Robotaxi Testing. GM's Cruise to start testing robotaxis in Phoenix area with human safety drivers on board.

insight • 4 months ago • Via Last Week in AI • abcnews.go.com

Anthropic Leadership. Mike Krieger joins Anthropic as Chief Product Officer.

insight • 4 months ago • Via Last Week in AI • www.anthropic.com

OpenAI Leadership Change. OpenAI's Chief Scientist and Co-Founder Is Leaving the Company.

insight • 4 months ago • Via Last Week in AI • www.nytimes.com

AI Music Sandbox. Google Unveils Music AI Sandbox Making Loops From Prompts.

insight • 4 months ago • Via Last Week in AI • www.cnet.com

AI Emissions Concerns. Microsoft's emissions and water usage spiked due to the increased demand for AI technologies, posing challenges to meeting sustainability goals.

insight • 4 months ago • Via Last Week in AI • www.pcmag.com

Investment in AI. Microsoft announces a 4 billion euro investment in cloud and AI infrastructure, AI skilling, and French Tech acceleration.

data point • 4 months ago • Via Last Week in AI • news.microsoft.com

AI College Partnership. Reddit's partnership with OpenAI allows the AI company to train its models on Reddit content, leading to a surge in Reddit shares.

insight • 4 months ago • Via Last Week in AI •

Waymo Investigation. The National Highway Traffic Safety Administration (NHTSA) has initiated an investigation into Alphabet's Waymo self-driving vehicles following reports of unexpected behavior and traffic safety violations.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Transparency Issues. This news came amidst the release of GPT-4o, but OpenAI's restrictive off-boarding agreement has raised concerns about the company's transparency.

insight • 4 months ago • Via Last Week in AI •

Multimodal Capabilities. The new model is 'natively multimodal,' meaning it can generate content or understand commands in voice, text, or images.

insight • 4 months ago • Via Last Week in AI •

OpenAI's GPT-4o Release. OpenAI has announced the release of GPT-4o, an enhanced version of the GPT-4 model that powers ChatGPT.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Astra's Functionality. Hassabis envisions AI's future to be less about the models and more about their functionality, with AI agents performing tasks on behalf of users.

insight • 4 months ago • Via Last Week in AI •

Project Astra Launch. Google's Project Astra, a real-time, multimodal AI assistant, is the future of AI at Google, according to Demis Hassabis, the head of Google DeepMind.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

AI in Journalism. Gannett is implementing AI-generated bullet points at the top of journalists' stories to enhance the reporting process.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

AI Legislation in Colorado. Colorado lawmakers have passed a landmark AI discrimination bill, which would prohibit employers from using AI to discriminate against workers.

data point • 4 months ago • Via Last Week in AI • www.jdsupra.com

AI Safety Commitments. Tech giants pledge AI safety commitments — including a ‘kill switch’ if they can't mitigate risks.

data point • 4 months ago • Via Last Week in AI • www.cnbc.com

Groundbreaking AI Law. World's first major law for artificial intelligence gets final EU green light.

data point • 4 months ago • Via Last Week in AI • www.cnbc.com

Emotional AI Initiative. Inflection AI reveals new team and plan to embed emotional AI in business bots.

data point • 4 months ago • Via Last Week in AI • venturebeat.com

Free AI Assistant. Microsoft, Khan Academy provide free AI assistant for all educators in US.

data point • 4 months ago • Via Last Week in AI • www.cnbc.com

AI Voice Concerns. OpenAI says Sky voice in ChatGPT will be paused after concerns it sounds too much like Scarlett Johansson.

data point • 4 months ago • Via Last Week in AI • www.tomsguide.com

AI Regulation Bill. Colorado governor signs sweeping AI regulation bill.

data point • 4 months ago • Via Last Week in AI • thehill.com

AI Likeness Management. Hollywood agency CAA aims to help stars manage their own AI likenesses.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

Universal Basic Income. AI 'godfather' Geoffrey Hinton advocates for universal basic income to address AI's impact on job inequality and wealth distribution.

recommendation • 4 months ago • Via Last Week in AI • www.bbc.com

First AI Regulation. EU member states have approved the world's first major law for regulating artificial intelligence, emphasizing trust, transparency, and accountability.

data point • 4 months ago • Via Last Week in AI • www.cnbc.com

AI and Education. AI tutors are quietly changing how kids in the US study, offering affordable and personalized assistance for school assignments.

insight • 4 months ago • Via Last Week in AI • techcrunch.com

AI-Language Model War. Tencent and iFlytek have entered a price war by slashing prices of large-language models used for chatbots.

insight • 4 months ago • Via Last Week in AI • sg.news.yahoo.com

Generative AI Upgrade. Amazon is upgrading its decade-old Alexa voice assistant with generative artificial intelligence and plans to charge a monthly subscription fee.

insight • 4 months ago • Via Last Week in AI • www.cnbc.com

OpenAI's Response. OpenAI has temporarily halted the use of the Sky voice in its ChatGPT application due to its resemblance to actress Scarlett Johansson's voice.

insight • 4 months ago • Via Last Week in AI • www.tomsguide.com

Claude's Discoveries. One notable discovery was a feature associated with the Golden Gate Bridge, which, when activated, indicated that Claude was contemplating the landmark.

insight • 4 months ago • Via Last Week in AI •

Anthropic Research. A new research paper published by Anthropic aims to demystify the 'black box' phenomenon of AI's algorithmic behavior.

insight • 4 months ago • Via Last Week in AI • www.theverge.com

AI Launch Issues. This incident continues a trend of Google facing issues with its latest AI features immediately after their launch, as seen in February 2023.

insight • 4 months ago • Via Last Week in AI •

Trust Undermined. This has led to a significant backlash online, undermining trust in Google's search engine, which is used by over two billion people for reliable information.

insight • 4 months ago • Via Last Week in AI •

Google's AI Errors. Google's recent unveiling of its new artificial intelligence (AI) capabilities for search has sparked controversy due to a series of errors and untruths.

insight • 4 months ago • Via Last Week in AI • www.nytimes.com

Nvidia Revenue Surge. Nvidia, Powered by A.I. Boom, Reports Soaring Revenue and Profits.

data point • 4 months ago • Via Last Week in AI • www.nytimes.com

Content Deals with OpenAI. Vox Media and The Atlantic sign content deals with OpenAI.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

PwC and OpenAI. PwC agrees deal to become OpenAI's first reseller and largest enterprise user.

data point • 4 months ago • Via Last Week in AI • www.cnbc.com

Hollywood AI Partnerships. Alphabet, Meta Offer Millions to Partner With Hollywood on AI.

data point • 4 months ago • Via Last Week in AI • www.bloomberg.com

AI Cloning Fines. Robocaller Who Used AI to Clone Biden's Voice Fined $6 Million.

data point • 4 months ago • Via Last Week in AI • www.theaiwired.com

AI Earbuds Innovation. Iyo thinks its gen AI earbuds can succeed where Humane and Rabbit stumbled.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

Real-time Video Translation. Microsoft Edge will translate and dub YouTube videos as you’re watching them.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Alexa's AI Overhaul. Amazon plans to give Alexa an AI overhaul — and a monthly subscription price.

data point • 4 months ago • Via Last Week in AI • www.cnbc.com

Opera's AI Integration. Opera is adding Google's Gemini AI to its browser.

data point • 4 months ago • Via Last Week in AI • www.engadget.com

Telegram Copilot Bot. Telegram gets an in-app Copilot bot.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Google AI Controversy. Google's A.I. Search Errors Cause a Furor Online.

data point • 4 months ago • Via Last Week in AI • www.nytimes.com

AI News Summary. Our 169th episode with a summary and discussion of last week's big AI news!

insight • 4 months ago • Via Last Week in AI •

AI Model Rankings. Scale AI publishes its first LLM Leaderboards, ranking AI model performance in specific domains.

data point • 4 months ago • Via Last Week in AI • siliconangle.com

AI Safety Concerns. OpenAI researcher who resigned over safety concerns joins Anthropic.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Training Compute Growth. Training Compute of Frontier AI Models Grows by 4-5x per Year.

data point • 4 months ago • Via Last Week in AI • epochai.org

xAI Funding. Elon Musk's xAI raises $6 billion in latest funding round.

data point • 4 months ago • Via Last Week in AI • www.forbes.com.au

ChatGPT Discounts. OpenAI launches programs making ChatGPT cheaper for schools and nonprofits.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

EU AI Act Developments. The EU is establishing the AI Office to regulate AI risks, foster innovation, and influence global AI governance.

insight • 4 months ago • Via Last Week in AI •

Deepfake Concerns. A deepfake video of a U.S. official discussing Ukraine's potential strikes in Russia has surfaced, raising concerns about the use of AI-powered disinformation.

insight • 4 months ago • Via Last Week in AI •

AI Misuse in Influencing Campaigns. Russia and China used OpenAI's A.I. in covert campaigns to manipulate public opinion and influence geopolitics, raising concerns about the impact of generative A.I. on online disinformation.

insight • 4 months ago • Via Last Week in AI •

AI Search Tool Rollback. Google's new artificial intelligence feature for its search engine, A.I. Overviews, has been significantly rolled back after it produced a series of errors and false information.

insight • 4 months ago • Via Last Week in AI •

PwC as OpenAI Reseller. OpenAI has partnered with consulting giant PwC to provide ChatGPT Enterprise, the business-oriented version of its AI chatbot, to PwC employees and clients.

insight • 4 months ago • Via Last Week in AI •

Vox Media and OpenAI Partnership. Vox Media has announced a strategic partnership with OpenAI, aiming to leverage AI technology to enhance its content and product offerings.

insight • 4 months ago • Via Last Week in AI •

Expensive AI Training Data. AI training data is becoming increasingly expensive, putting it out of reach for all but the wealthiest tech companies.

insight • 4 months ago • Via Last Week in AI •

Survey on AI Usage. AI products like ChatGPT are much hyped but not widely used, with only 2% of British respondents using such tools on a daily basis.

data point • 4 months ago • Via Last Week in AI •

OpenAI Board Conflict. OpenAI is also embroiled in controversy, with former board member Helen Toner accusing CEO Sam Altman of dishonesty and manipulation during a failed coup attempt.

insight • 4 months ago • Via Last Week in AI •

Musk's xAI Controversy. LeCun criticized Musk's leadership at xAI, calling him an erratic megalomaniac, following Musk's announcement of a $6 billion funding round for xAI.

insight • 4 months ago • Via Last Week in AI •

AI Industry Tensions. The AI industry is seeing increasing tension, highlighted by a recent clash between Elon Musk and Yann LeCun on social media.

insight • 4 months ago • Via Last Week in AI •

AI Video Generator. KLING is the latest AI video generator that could rival OpenAI's Sora.

data point • 4 months ago • Via Last Week in AI • the-decoder.com

AI Beauty Pageant. The Uncanny Rise of the World's First AI Beauty Pageant.

data point • 4 months ago • Via Last Week in AI • www.wired.com

GPT-4 Exam Performance. GPT-4 didn't ace the bar exam after all, MIT research suggests — it didn't even break the 70th percentile.

data point • 4 months ago • Via Last Week in AI • www.livescience.com

Election Risks. Testing and mitigating elections-related risks.

data point • 4 months ago • Via Last Week in AI • www.anthropic.com

OpenAI Whistleblowers. OpenAI Insiders Warn of a 'Reckless' Race for Dominance.

data point • 4 months ago • Via Last Week in AI • www.nytimes.com

AGI by 2027. Former OpenAI researcher foresees AGI reality in 2027.

data point • 4 months ago • Via Last Week in AI • cointelegraph.com

Tech Giants Collaboration. Google, Intel, Microsoft, AMD and more team up to develop an interconnect standard to rival Nvidia's NVLink.

data point • 4 months ago • Via Last Week in AI • www.pcgamer.com

Microsoft Layoffs. Microsoft Lays Off 1,500 Workers, Blames 'AI Wave'.

data point • 4 months ago • Via Last Week in AI • futurism.com

Zoox Self-Driving Cars. Zoox to test self-driving cars in Austin and Miami.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

UAE AI Partnership. UAE seeks 'marriage' with US over artificial intelligence deals.

data point • 4 months ago • Via Last Week in AI • www.ft.com

Saudi Investment. Saudi fund invests in China effort to create rival to OpenAI.

data point • 4 months ago • Via Last Week in AI • www.ft.com

OpenAI Robotics Group. OpenAI is restarting its robotics research group.

data point • 4 months ago • Via Last Week in AI • www.therobotreport.com

Google's NotebookLM. Google's updated AI-powered NotebookLM expands to India, UK and over 200 other countries.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

ElevenLabs Sound Effects. ElevenLabs’ AI generator makes explosions or other sound effects with just a prompt.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Perplexity AI Feature. Perplexity AI's new feature will turn your searches into shareable pages.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

Udio 130 Model. Udio introduces new udio-130 music generation model and more advanced features.

data point • 4 months ago • Via Last Week in AI • braintitan.medium.com

Apple's AI Features. 'Apple Intelligence' will automatically choose between on-device and cloud-powered AI.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Amazon AI Impact. Amazon's use of AI and robotics in its warehouses isolates workers and hinders union organizing, according to a new report by Oxford University researchers.

insight • 4 months ago • Via Last Week in AI • www.404media.co

FTC Antitrust Investigations. FTC and DOJ open antitrust investigations into Microsoft, OpenAI, and Nvidia, with the FTC looking into potential antitrust issues related to investments made by technology companies into smaller AI companies.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Microsoft's AI Investment. Microsoft plans to invest $3.2 billion in AI infrastructure in Sweden, including training 250,000 people and increasing capacity at its data centers.

data point • 4 months ago • Via Last Week in AI • finance.yahoo.com

AI Chatbot Accuracy. AI chatbots, including Google’s Gemini 1.0 Pro and OpenAI’s GPT-3, provided incorrect information 27% of the time when asked about voting and the 2024 election.

data point • 4 months ago • Via Last Week in AI • www.nbcnews.com

Kuaishou's New Product. Kuaishou, a Chinese short-video app, has launched a text-to-video service similar to OpenAI's Sora, as part of the race among Chinese Big Tech firms to catch up with US counterparts in AI applications.

insight • 4 months ago • Via Last Week in AI •

Concept Storage Method. A new research paper from OpenAI introduces a method to identify how the AI stores concepts that might cause misbehavior.

data point • 4 months ago • Via Last Week in AI • cdn.openai.com

Whistleblower Protections. The proposal also calls for the abolition of nondisparagement agreements that prevent insiders from voicing risk-related concerns.

insight • 4 months ago • Via Last Week in AI •

Right to Warn. Thirteen current and former employees of OpenAI and Google DeepMind have published a proposal demanding the right to warn the public about the potential dangers of advanced artificial intelligence (AI).

data point • 4 months ago • Via Last Week in AI • www.vox.com

ChatGPT Outage. OpenAI's ChatGPT experienced multiple outages, including a major one during the daytime in the US, but the issues were eventually resolved.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Anticipating AGI. Former OpenAI researcher predicts the arrival of AGI by 2027, foreseeing AI machines surpassing human intelligence and national security implications.

insight • 4 months ago • Via Last Week in AI • cointelegraph.com

Regulatory Challenges. Waymo issues a voluntary software recall after a driverless vehicle collides with a telephone pole, prompting increased regulatory scrutiny of the autonomous vehicle industry.

insight • 4 months ago • Via Last Week in AI •

Deepfake Impact. AI played a significant role in the Indian election, with political parties using deepfakes and AI-generated content for targeted communication, translation of speeches, and personalized voter outreach.

insight • 4 months ago • Via Last Week in AI • theconversation.com

OpenAI Revenue Growth. OpenAI's annualized revenue has more than doubled in the last six months, reaching $3.4 billion.

data point • 4 months ago • Via Last Week in AI • www.pymnts.com

OpenAI Partnership. OpenAI and Apple announce partnership to integrate ChatGPT into Apple experiences.

data point • 4 months ago • Via Last Week in AI • openai.com

Generative Video Creation. Dream Machine enables users to create high-quality videos from simple text prompts such as 'a cute Dalmatian puppy running after a ball on the beach at sunset.'

insight • 4 months ago • Via Last Week in AI •

Luma AI Launch. Luma AI has launched the public beta of its new AI video generation model, Dream Machine, which has garnered overwhelming user interest.

data point • 4 months ago • Via Last Week in AI • siliconangle.com

Conversations with Siri. Key features include a more conversational Siri, AI-generated 'Genmoji,' and integration with OpenAI's GPT-4o for handling complex requests.

insight • 4 months ago • Via Last Week in AI •

Apple AI Features. Apple has announced 'Apple Intelligence,' a suite of AI features for iPhone, Mac, and more at WWDC 2024.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Meta's AI Models. Meta releases flurry of new AI models for audio, text and watermarking.

data point • 4 months ago • Via Last Week in AI • venturebeat.com

OpenAI Revenue Growth. Report: OpenAI Doubled Annualized Revenue in 6 Months.

data point • 4 months ago • Via Last Week in AI • www.pymnts.com

Claude 3.5 Release. Anthropic just dropped Claude 3.5 Sonnet with better vision and a sense of humor.

data point • 4 months ago • Via Last Week in AI • www.tomsguide.com

Runway Video Model. Runway unveils new hyper realistic AI video model Gen-3 Alpha, capable of 10-second-long clips.

data point • 4 months ago • Via Last Week in AI • venturebeat.com

Luma's Dream Machine. 'We don’t need Sora anymore': Luma’s new AI video generator Dream Machine slammed with traffic after debut.

data point • 4 months ago • Via Last Week in AI • venturebeat.com

New Apple Features. Apple Intelligence: every new AI feature coming to the iPhone and Mac.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Waymo's Recall. Waymo issues software and mapping recall after robotaxi crashes into a telephone pole.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Perplexity Controversy. Buzzy AI Search Engine Perplexity Is Directly Ripping Off Content From News Outlets.

data point • 4 months ago • Via Last Week in AI • www.forbes.com

Huawei's Chip Concerns. Huawei exec concerned over China's inability to obtain 3.5nm chips, bemoans lack of advanced chipmaking tools.

data point • 4 months ago • Via Last Week in AI • www.tomshardware.com

Reward Tampering Research. Sycophancy to subterfuge: Investigating reward tampering in language models.

data point • 4 months ago • Via Last Week in AI • www.anthropic.com

Adept and Microsoft Deal. AI startup Adept is in deal talks with Microsoft.

data point • 4 months ago • Via Last Week in AI • fortune.com

AI Influencer Ads. AI-generated avatars are being introduced on TikTok for brands to use in ads, allowing for customization and dubbing in multiple languages.

data point • 4 months ago • Via Last Week in AI • www.nytimes.com

AI Models Comparison. Fireworks AI releases Firefunction-v2, an open-source function-calling model designed to excel in real-world applications, rivaling high-end models like GPT-4o at a fraction of the cost and with superior speed and functionality.

insight • 4 months ago • Via Last Week in AI • www.marktechpost.com

Brave AI Enhancement. Brave's in-browser AI assistant, Leo, now incorporates real-time Brave Search results, providing more accurate and up-to-date answers.

data point • 4 months ago • Via Last Week in AI • brave.com

Revenue Loss Estimate. The publishing industry is expected to lose over $10 billion due to such practices, according to Ameet Shah, partner and SVP of publisher operations and strategy at Prohaska Consulting.

data point • 4 months ago • Via Last Week in AI •

Publisher Backlash. AI search startup Perplexity, backed by Jeff Bezos and other tech giants, is facing backlash from publishers like The New York Times, The Guardian, Condé Nast, and Forbes for allegedly circumventing blocks to access and repurpose their content.

data point • 4 months ago • Via Last Week in AI • www.adweek.com

Benchmark Test Performance. Claude 3.5 Sonnet excelled in benchmark tests, outscoring GPT-4o, Gemini 1.5 Pro, and Meta's Llama 3 400B in most categories.

data point • 4 months ago • Via Last Week in AI •

AI-Generated Script Backlash. London premiere of AI-generated script film cancelled after backlash from audience and industry, highlighting ongoing debate over AI's role in the film industry.

concern • 4 months ago • Via Last Week in AI • www.theguardian.com

Emotion Detection Controversy. AI-powered cameras in UK train stations, including London's Euston and Waterloo, used Amazon software to scan faces and predict emotions, age, and gender for potential advertising and safety purposes, raising concerns about privacy and reliability.

concern • 4 months ago • Via Last Week in AI • www.wired.com

Speed Improvement. The new model, which is available to Claude users on the web and iOS, and to developers, is said to be twice as fast as its predecessor and outperforms the previous top model, 3 Opus.

data point • 4 months ago • Via Last Week in AI •

Claude 3.5 Sonnet Launch. Anthropic has launched its latest AI model, Claude 3.5 Sonnet, which it claims can match or surpass the performance of OpenAI’s GPT-4o or Google’s Gemini across a broad range of tasks.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Gemini Side Panels. Google rolls out Gemini side panels for Gmail and other Workspace apps.

insight • 4 months ago • Via Last Week in AI • www.engadget.com

AI Music Lawsuits. Music labels sue AI music generators for copyright infringement.

insight • 4 months ago • Via Last Week in AI • arstechnica.com

AI Safety Bill. Y Combinator rallies start-ups against California's AI safety bill.

insight • 4 months ago • Via Last Week in AI • www.siliconrepublic.com

Stock Sale Policies. OpenAI walks back controversial stock sale policies, will treat current and former employees the same.

insight • 4 months ago • Via Last Week in AI • www.cnbc.com

Advanced AI Chip. China's ByteDance working with Broadcom to develop advanced AI chip, sources say.

insight • 4 months ago • Via Last Week in AI • theedgemalaysia.com

Figma AI Redesign. Figma announces big redesign with AI.

insight • 4 months ago • Via Last Week in AI • www.theverge.com

Waymo Robotaxis. Waymo ditches the waitlist and opens up its robotaxis to everyone in San Francisco.

insight • 4 months ago • Via Last Week in AI • www.theverge.com

ChatGPT for Mac. OpenAI's ChatGPT for Mac is now available to all users.

insight • 4 months ago • Via Last Week in AI • arstechnica.com

Voice Mode Delay. OpenAI delays rolling out its 'Voice Mode' to July.

insight • 4 months ago • Via Last Week in AI • www.channelnewsasia.com

Collaboration Tools. Anthropic Debuts Collaboration Tools for Claude AI Assistant.

insight • 4 months ago • Via Last Week in AI • www.pymnts.com

AI News Summary. Our 172nd episode with a summary and discussion of last week's big AI news!

data point • 4 months ago • Via Last Week in AI •

Formation Bio Investment. Formation Bio raises $372M in Series D funding to apply AI to drug development, aiming to streamline clinical trials and drug development processes.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

Humanoid Robot Deployment. Agility Robotics' Digit humanoids have landed their first official job with GXO Logistics Inc., marking the industry's first formal commercial deployment of humanoids.

data point • 4 months ago • Via Last Week in AI • www.therobotreport.com

Google Translate Expansion. Google Translate has added 110 new languages, including Cantonese and Punjabi, bringing the total of supported languages to nearly 250.

data point • 4 months ago • Via Last Week in AI • lifehacker.com

AI Voice Imitations Controversy. Morgan Freeman expresses gratitude to fans for calling out unauthorized AI imitations of his voice, highlighting the growing issue of AI-generated voice imitations in the entertainment industry.

insight • 4 months ago • Via Last Week in AI • variety.com

Ethical AI Positioning. Anthropic aims to enable beneficial uses of AI by government agencies, positioning itself as an ethical choice among rivals.

insight • 4 months ago • Via Last Week in AI • techcrunch.com

New Collaboration Tools. Anthropic has launched an update to enhance team collaboration and productivity, introducing a Projects feature that allows users to organize their interactions with Claude.

data point • 4 months ago • Via Last Week in AI • www.pymnts.com

Kicking Off AI Usage. The company's expansion of its service to all San Francisco residents is seen as a crucial step towards the normalization of autonomous vehicles and a potential path to profitability for the historically money-losing operation.

insight • 4 months ago • Via Last Week in AI •

Waymo Expansion. Waymo announced that its robotaxi service in San Francisco is now open to the public, eliminating the need for customers to sign up for a waitlist.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

AI Music Lawsuits. Universal Music Group, Sony Music, and Warner Records have filed lawsuits against AI music-synthesis companies Udio and Suno, accusing them of mass copyright infringement.

data point • 4 months ago • Via Last Week in AI • arstechnica.com

Performance Improvement. CriticGPT has shown significant effectiveness, with human reviewers using CriticGPT performing 60% better in evaluating ChatGPT's code outputs than those without such assistance.

data point • 4 months ago • Via Last Week in AI •

CriticGPT Introduction. OpenAI has introduced a new AI model, CriticGPT, designed to identify errors in the outputs of ChatGPT, an AI system built on the GPT-4 architecture.

data point • 4 months ago • Via Last Week in AI • www.marktechpost.com

AI Scaling Myths. The belief that AI scaling will lead to artificial general intelligence is based on misconceptions about scaling laws, the availability of training data, and the limitations of synthetic data.

insight • 4 months ago • Via Last Week in AI • www.aisnakeoil.com

Gaming AI Capabilities. MIT robotics pioneer Rodney Brooks believes that people are overestimating the capabilities of generative AI and that it's flawed to assign human capabilities to it.

insight • 4 months ago • Via Last Week in AI • techcrunch.com

LLaMA 3 Release. Meta is about to launch its biggest LLaMA model yet, highlighting its significance.

data point • 4 months ago • Via Last Week in AI • www.tomsguide.com

China's AI Competition. The conversation includes China's competition in AI and its impacts.

insight • 4 months ago • Via Last Week in AI •

AI Features Discussion. The episode covers emerging AI features and legal disputes over data usage.

insight • 4 months ago • Via Last Week in AI •

Workforce Development. U.S. government addresses critical workforce shortages for the semiconductor industry with a new program.

recommendation • 4 months ago • Via Last Week in AI • www.tomshardware.com

Nvidia's Revenue. Nvidia is expected to make $12 billion from AI chips in China this year despite US controls.

data point • 4 months ago • Via Last Week in AI • www.ft.com

AI Regulation Issues. With Chevron's demise, AI regulation seems dead in the water.

insight • 4 months ago • Via Last Week in AI • techcrunch.com

AI Video Fund. Bridgewater starts a $2 billion fund that uses machine learning for decision-making.

data point • 4 months ago • Via Last Week in AI • fortune.com

Runway's Gen 3 Alpha. Runway's Gen-3 Alpha AI video model is now available, but there’s a catch.

data point • 4 months ago • Via Last Week in AI • venturebeat.com

Gemini 1.5 Launch. Google's release of Gemini 1.5, Flash and Pro with 2M tokens to the public.

data point • 4 months ago • Via Last Week in AI • venturebeat.com

Security Flaw Discovered. OpenAI's ChatGPT macOS app was found to be storing user conversations in plain text, making them easily accessible to potential malicious actors.

data point • 4 months ago • Via Last Week in AI • www.theverge.com

AI Model Evaluation Advocacy. Anthropic is advocating for third-party AI model evaluations to assess capabilities and risks, focusing on safety levels, advanced metrics, and efficient evaluation development.

insight • 4 months ago • Via Last Week in AI • www.enterpriseai.news

AI Bias in Medical Imaging. AI models analyzing medical images can be biased, particularly against women and people of color, and while debiasing strategies can improve fairness, they may not generalize well to new patient populations.

recommendation • 4 months ago • Via Last Week in AI • medicalxpress.com

Apple's Board Role. Apple Inc. has secured an observer role on OpenAI's board, with Phil Schiller, Apple's App Store head and former marketing chief, appointed to the position.

data point • 4 months ago • Via Last Week in AI • www.bloomberg.com

Democratizing AI Access. Mozilla's Llamafile and Builders Projects were showcased at the AI Engineer World's Fair, emphasizing democratized access to AI technology.

insight • 4 months ago • Via Last Week in AI • thenewstack.io

Integrating ChatGPT. This move follows Apple's announcement to integrate ChatGPT into its iPhone, iPad, and Mac devices.

insight • 4 months ago • Via Last Week in AI •

AI Music Generation. Suno launches iPhone app — now you can make AI music on the go, which allows users to generate full songs from text prompts or sound.

data point • 4 months ago • Via Last Week in AI • www.tomsguide.com

New AI Model Release. Kyutai has open-sourced Moshi, a real-time native multimodal foundation AI model that can listen and speak simultaneously.

data point • 4 months ago • Via Last Week in AI • www.marktechpost.com

Mind-reading AI Progress. AI can accurately recreate what someone is looking at based on brain activity, greatly improved when the AI learns which parts of the brain to focus on.

insight • 4 months ago • Via Last Week in AI • www.newscientist.com

AI Coding Startup Valuation. AI coding startup Magic seeks $1.5-billion valuation in new funding round, aiming to develop AI models for writing software.

data point • 4 months ago • Via Last Week in AI • finance.yahoo.com

AI Lawsuits Implications. AI music lawsuits could shape the future of the music industry, as major labels sue AI firms for alleged copyright infringement.

insight • 4 months ago • Via Last Week in AI • www.billboard.com

AI Health Coach Collaboration. OpenAI and Arianna Huffington are collaborating on an 'AI health coach' that aims to provide personalized health advice and guidance based on individual data.

insight • 4 months ago • Via Last Week in AI •

FlashAttention-3 Efficiency. The results show that FlashAttention-3 achieves a 1.5-2.0x speedup on H100 GPUs, with FP16 reaching up to 740 TFLOPs/s and FP8 reaching close to 1.2 PFLOPs/s.

data point • 4 months ago • Via Last Week in AI •

Antitrust Concerns. These changes occur amid growing antitrust concerns over Microsoft's partnership with OpenAI, with regulators in the UK and EU scrutinizing the deal.

insight • 4 months ago • Via Last Week in AI •

Concerns Over AI Safety. OpenAI is facing safety concerns from employees and external sources, raising worries about the potential impact on society.

insight • 4 months ago • Via Last Week in AI • www.theverge.com

AI Video Model Development. Odyssey is developing an AI video model that can create Hollywood-grade visual effects and allow users to edit and control the output at a granular level.

data point • 4 months ago • Via Last Week in AI •

Regulatory Scrutiny Reaction. Microsoft has relinquished its observer seat on the board of OpenAI, a move that comes less than eight months after it secured the non-voting position.

data point • 4 months ago • Via Last Week in AI •

OpenAI Security Breach. In early 2022, a hacker infiltrated OpenAI's internal messaging systems, stealing information about the design of the company's AI technologies.

data point • 4 months ago • Via Last Week in AI •

Perception of Progress Assessment. Despite the introduction of this system, there is no consensus in the AI research community on how to measure progress towards AGI, and some view OpenAI's five-tier system as a tool to attract investors rather than a scientific measurement of progress.

insight • 4 months ago • Via Last Week in AI •

Advancements in AGI. OpenAI is reportedly close to reaching Level 2, or 'Reasoners,' which would be capable of basic problem-solving on par with a human with a doctorate degree.

data point • 4 months ago • Via Last Week in AI •

Current AI Level. OpenAI's technology, such as GPT-4o that powers ChatGPT, is currently at Level 1, which includes AI that can engage in conversational interactions.

data point • 4 months ago • Via Last Week in AI •

OpenAI's Five-Tier Model. OpenAI has introduced a five-tier system to track its progress towards developing artificial general intelligence (AGI).

data point • 4 months ago • Via Last Week in AI •

AMD Acquisition News. AMD plans to acquire Silo AI in a $665 million deal.

data point • 4 months ago • Via Last Week in AI • finance.yahoo.com

AI-generated Content Labels. Vimeo joins YouTube and TikTok in launching new AI content labels.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

OpenAI and Health Coach. OpenAI and Arianna Huffington are working together on an 'AI health coach.'

data point • 4 months ago • Via Last Week in AI • www.theverge.com

Mind-Reading AI. Mind-reading AI recreates what you're looking at with amazing accuracy.

data point • 4 months ago • Via Last Week in AI • www.newscientist.com

New AI Features. Figma pauses its new AI feature after Apple controversy.

data point • 4 months ago • Via Last Week in AI • techcrunch.com

Content Regulation Pressure. There is a need for transparency and regulation in AI content labeling and licensing.

insight • 4 months ago • Via Last Week in AI •

AI Coding Startup. AI coding startup Magic seeks a $1.5-billion valuation in new funding round, sources say.

data point • 4 months ago • Via Last Week in AI • finance.yahoo.com

Elon Musk's GPU Plans. Elon Musk reveals plans to make the world's 'Most Powerful' 100,000 NVIDIA GPU AI cluster.

data point • 4 months ago • Via Last Week in AI • wccftech.com

AI Industry Challenges. We delve into the latest advancements and challenges in the AI industry, highlighting new features from Figma and Quora, regulatory pressures on OpenAI, and significant investments in AI infrastructure.

insight • 4 months ago • Via Last Week in AI •

AI's Limitations. LLMs are great at clustering similar things but end up 'regurgitating a lot of words with slight paraphrases while adding conceptually little, and understanding even less.'

insight • 4 months ago • Via Gary Marcus on AI •

Partial Regurgitation Defined. The term 'partial regurgitation' is introduced to describe AI's output not being a full reconstruction of the original source.

insight • 4 months ago • Via Gary Marcus on AI •

Regurgitation Process. The regurgitative process need not be verbatim.

insight • 4 months ago • Via Gary Marcus on AI •

Storage of Weights. Neural nets do store weights, but that doesn't mean that they know what they are talking about.

insight • 4 months ago • Via Gary Marcus on AI •

Neural Nets Critique. Gary Marcus criticizes neural nets, stating, 'Neural nets don't really understand anything they read on the web.'

insight • 4 months ago • Via Gary Marcus on AI • garymarcus.substack.com

Understanding Proof. Partial regurgitation, no matter how fluent, does not, and will not ever, constitute genuine comprehension.

insight • 4 months ago • Via Gary Marcus on AI •

Need for New Approach. Getting to real AI will require a different approach.

recommendation • 4 months ago • Via Gary Marcus on AI •

Comparison to DeepMind. By comparison, Google DeepMind devotes a lot of its energy towards projects like AlphaFold that have clear potential to help humanity.

insight • 4 months ago • Via Gary Marcus on AI •

Safety Resources. Furthermore, OpenAI apparently hasn't even fulfilled its own promise to devote 20% of its resources to AI safety.

insight • 4 months ago • Via Gary Marcus on AI •

Financial Priorities. Instead, they appear to be focused precisely on financial return, and seem almost indifferent to some of the ways in which their product has already hurt large numbers of people (artists, writers, voiceover actors, etc.).

insight • 4 months ago • Via Gary Marcus on AI •

Product Focus. The first step towards that should be a question about product – are the products we are making benefiting humanity?

recommendation • 4 months ago • Via Gary Marcus on AI •

OpenAI's Mission. As recently as November 2023, OpenAI promised in their filing as a nonprofit exempt from income tax to make AI that 'benefits humanity … unconstrained by a need to generate financial return'.

data point • 4 months ago • Via Gary Marcus on AI •

Future of AI. Gary Marcus hopes that the most ethical company wins. And that we don’t leave our collective future entirely to self-regulation.

insight • 4 months ago • Via Gary Marcus on AI •

Ethical Concerns. The real issue isn’t whether OpenAI would win in court, it’s what happens to all of us, if a company with a track record for cutting ethical corners winds up first to AGI.

insight • 4 months ago • Via Gary Marcus on AI •

Unmet Safety Promises. OpenAI promised to devote 20% of its efforts to AI safety, but never delivered, according to a recent report.

insight • 4 months ago • Via Gary Marcus on AI • fortune.com

Call for Independent Oversight. Without independent scientists in the loop, with a real voice, we are lost.

recommendation • 4 months ago • Via Gary Marcus on AI •

Questioning Government Trust. It's correct for the public to take everything OpenAI says with a grain of salt, especially because of their massive power and chance to potentially put humanity at risk.

insight • 4 months ago • Via Gary Marcus on AI •

Tax Status Conflict. OpenAI filed for non-profit tax exempt status, claiming that the company's mission was to 'safely benefit humanity', even as they turn over almost half their profits to Microsoft.

insight • 4 months ago • Via Gary Marcus on AI •

Governance Promises Broken. Altman once promised that outsiders would play an important role in the company's governance; that key promise has not been kept.

insight • 4 months ago • Via Gary Marcus on AI • www.newyorker.com

Restrictive Employee Contracts. OpenAI had highly unusual contractual 'clawback' clauses designed to keep employees from speaking out about any concerns about the company.

insight • 4 months ago • Via Gary Marcus on AI • www.vox.com

Altman's Conflicts of Interest. Altman appears to have misled people about his personal holdings in OpenAI, omitting potential conflicts of interest between his role as CEO of the nonprofit OpenAI and other companies he might do business with.

insight • 4 months ago • Via Gary Marcus on AI •

CTO's Miscommunication. CTO Mira Murati embarrassed herself and the company in her interview with Joanna Stern of the Wall Street Journal, sneakily conflating 'publicly available' with 'public domain'.

insight • 4 months ago • Via Gary Marcus on AI •

Copyright Issues. OpenAI has trained on a massive amount of copyrighted material, without consent, and in many instances without compensation.

insight • 4 months ago • Via Gary Marcus on AI •

Misuse of Artist's Voice. OpenAI proceeded to make a Scarlett Johansson-like voice for GPT-4o, even after she specifically told them not to, highlighting their overall dismissive attitude towards artist consent.

insight • 4 months ago • Via Gary Marcus on AI • www.npr.org

OpenAI's Misleading Name. OpenAI called itself open, and traded on the notion of being open, but even as early as May 2016 knew that the name was misleading.

insight • 4 months ago • Via Gary Marcus on AI • substackcdn.com

Governance Representation. Sam Altman, 2016: 'We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board.'

data point • 4 months ago • Via Gary Marcus on AI • www.newyorker.com

Questioning Authority. What happened to the wide swaths of the world? To quote Altman himself, 'Why do these fuckers get to decide what happens to me?'

insight • 4 months ago • Via Gary Marcus on AI •

Accountability Reminder. Gary Marcus keeps receipts.

insight • 4 months ago • Via Gary Marcus on AI •

Conflict of Interest. Sam has now divested his stake in that investment firm.

insight • 4 months ago • Via Gary Marcus on AI •

Toner's Whistleblowing. Toner was pushed out for her sin of speaking up.

insight • 4 months ago • Via Gary Marcus on AI •

Firing Consideration. The board had contemplated firing Sam over trust issues before that.

insight • 4 months ago • Via Gary Marcus on AI •

Safety Process Inaccuracy. On multiple occasions he gave inaccurate information about the small number of formal safety processes that the company did have in place.

insight • 4 months ago • Via Gary Marcus on AI •

ChatGPT Announcement. The board was not informed in advance about that [ChatGPT], we learned about ChatGPT on Twitter.

insight • 4 months ago • Via Gary Marcus on AI •

Sam's Deceit. Putting Toner's disclosures together with the other lies from OpenAI that I documented the other day, I think we can safely put Kara's picture of Sam the Innocent to bed.

insight • 4 months ago • Via Gary Marcus on AI •

Oversight Concerns. Altman is consolidating more and more power and seeming less and less on the level.

insight • 4 months ago • Via Gary Marcus on AI •

Lack of Candor. The (old) board never said that the firing of Sam was directly about safety, they said it was about candor.

insight • 4 months ago • Via Gary Marcus on AI •

Nonprofit Status. If they cannot assemble a board that respects the legal filings they made, and cannot behave in keeping with their oft-repeated promises, they must dissolve the nonprofit.

recommendation • 4 months ago • Via Gary Marcus on AI • www.citizen.org

Trust Issues. If they can't trust Altman, I don't see how they can do their job.

insight • 4 months ago • Via Gary Marcus on AI •

Misleading Claims. Both read to me as deeply misleading, verging on defamatory.

insight • 4 months ago • Via Gary Marcus on AI •

Board Attacks. At least two proxies have gone after Helen Toner, one in The Economist, highbrow, one low (a post on X that got around 200,000 views).

data point • 4 months ago • Via Gary Marcus on AI • www.economist.com

Lack of Trust. The degree to which they diverted from that core issue that led to Sam's firing is genuinely disturbing.

insight • 4 months ago • Via Gary Marcus on AI •

ChatGPT Announcement. The board was not informed in advance about that. We learned about ChatGPT on Twitter.

data point • 4 months ago • Via Gary Marcus on AI •

Alignment Problem. We are no closer to a solution to the alignment problem now than we were then.

insight • 4 months ago • Via Gary Marcus on AI •

Unmet Expectations. For all the daily claims of 'exponential progress', reliability is still a dream.

insight • 4 months ago • Via Gary Marcus on AI •

Time's Ravages. What I said then to Bach still holds, 100%, 26 months later.

insight • 4 months ago • Via Gary Marcus on AI •

Deep Learning Critique. The ridicule started with my infamous 'Deep Learning Is Hitting a Wall' essay.

insight • 4 months ago • Via Gary Marcus on AI • garymarcus.substack.com

Longstanding Warnings. Gary Marcus has warned people about the limits of deep learning, including hallucinations, since 2001.

data point • 4 months ago • Via Gary Marcus on AI •

Musk's Shift. Musk has switched teams, flipping from calling for a pause to going all in on a technology that remains exactly as incorrigible as it ever was.

insight • 4 months ago • Via Gary Marcus on AI •

Slowing Innovation. Christopher Mims largely echoed what I have been arguing here, writing that 'The pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant.'

insight • 4 months ago • Via Gary Marcus on AI • www.wsj.com

No Breakthroughs. It has been almost two years since there’s been a bona fide GPT-4-sized breakthrough, despite the constant boasts of exponential progress.

insight • 4 months ago • Via Gary Marcus on AI •

Lackluster Fireside Chat. Melissa Heikkilä at Technology Review more or less panned Altman’s recent fireside chat at AI for Good.

data point • 4 months ago • Via Gary Marcus on AI • mailchi.mp

Financial Conflicts. The Wall Street Journal had a long discussion of Altman’s financial holdings and possible conflicts of interest.

data point • 4 months ago • Via Gary Marcus on AI • www.wsj.com

Bad Press for Altman. The bad press about Sam Altman and OpenAI, who once seemingly could do no wrong, just keeps coming.

insight • 4 months ago • Via Gary Marcus on AI •

Musk-LeCun Tension. Yann LeCun just pushed Elon Musk to the point of unfollowing him.

insight • 4 months ago • Via Gary Marcus on AI •

Kara Swisher's Bias. Paris Marx echoed my own feelings about Kara Swisher’s apparent lack of objectivity around Altman.

insight • 4 months ago • Via Gary Marcus on AI • disconnect.blog

Informed Endorsement. I fully endorse its four recommendations.

insight • 4 months ago • Via Gary Marcus on AI •

Gift Link Provided. Roose supplied a gift link.

data point • 4 months ago • Via Gary Marcus on AI • x.com

Key Contributors. The letter itself, cosigned by Bengio, Hinton, and Russell.

data point • 4 months ago • Via Gary Marcus on AI • righttowarn.ai

Common Sense Emphasis. Nowadays we both stress the absolutely essential nature of common sense, physical reasoning and world models, and the failure of current architectures to handle those well.

insight • 4 months ago • Via Gary Marcus on AI •

Future AI Development. If you want to argue that some future, as yet unknown form of deep learning will be better, fine, but with regards to what exists and is popular now, your view has come to mirror my own.

insight • 4 months ago • Via Gary Marcus on AI •

Critique Overlap. Your current critique for what is wrong with LLMs overlaps heavily with what I said repeatedly from 2018 to 2022.

insight • 4 months ago • Via Gary Marcus on AI •

Potential Alliance. The irony of all of this is that you and I are among the minority of people who have come to fully understand just how limited LLMs are, and what we need to do next. We should be allies.

recommendation • 4 months ago • Via Gary Marcus on AI •

Historical Dismissals. There is a clear pattern: you often initially dismiss my ideas, only to converge on the same place later — without ever citing my earlier arguments.

insight • 4 months ago • Via Gary Marcus on AI •

Funding Decline. Generative AI seed funding drops.

data point • 4 months ago • Via Gary Marcus on AI • pitchbook.com

Data Point Validity. Every data point there is imaginary; we aren’t plotting real things here.

insight • 4 months ago • Via Gary Marcus on AI •

Read Marcus's Book. Gary Marcus wrote his new book Taming Silicon Valley in part to address regulatory issues.

recommendation • 4 months ago • Via Gary Marcus on AI • garymarcus.substack.com

Regulatory Failure. Self-regulation is a farce, and the US legislature has made almost no progress thus far.

insight • 4 months ago • Via Gary Marcus on AI •

Underprepared for AGI. We are woefully underprepared for AGI whenever it comes.

insight • 4 months ago • Via Gary Marcus on AI •

Graph Issues. The double Y-axis makes no sense, and presupposes its own conclusion.

insight • 4 months ago • Via Gary Marcus on AI •

GPT-4 Comparisons. GPT-4 is not actually equivalent to a smart high schooler.

insight • 4 months ago • Via Gary Marcus on AI •

AGI Prediction. OpenAI's internal roadmap alleged that AGI would be achieved by 2027.

data point • 4 months ago • Via Gary Marcus on AI •

Industry Pushback. Both the well-known deep-learning expert Andrew Ng and the industry newspaper The Information came out against 1047 in vigorous terms.

data point • 4 months ago • Via Gary Marcus on AI •

Self-Regulation Skepticism. Big Tech's overwhelming message is 'Trust Us'. Should we?

insight • 4 months ago • Via Gary Marcus on AI •

Certification Requirements. Anyone training a 'covered AI model' must certify, under penalty of perjury, that their model will not be used to enable a 'hazardous capability' in the future.

data point • 4 months ago • Via Gary Marcus on AI •

Concern over Liability. Andrew Ng complains that the bill defines an unreasonable 'hazardous capability' designation that may make builders of large AI models liable if someone uses their models to do something that exceeds the bill's definition of harm.

insight • 4 months ago • Via Gary Marcus on AI •

Proposed Bill SB-1047. State Senator Scott Wiener and others in California have proposed a bill, SB-1047, that would build in some modest restraints around AI.

data point • 4 months ago • Via Gary Marcus on AI • leginfo.legislature.ca.gov

Serious Damage Definition. Hazardous is defined here as half a billion dollars in damage; should we give the AI industry a free pass no matter how much harm might be done?

insight • 4 months ago • Via Gary Marcus on AI •

Regulation vs. Innovation. The Information's op-ed complains that 'California's effort to regulate AI would stifle innovation', but never really details how.

insight • 4 months ago • Via Gary Marcus on AI •

Demand for Stronger Regulation. We should be making SB-1047 stronger, not weaker.

recommendation • 4 months ago • Via Gary Marcus on AI •

Regulatory Support Lack. Not one of the companies that previously stood up and said they support AI regulation is standing up for this one.

insight • 4 months ago • Via Gary Marcus on AI •

Kurzweil's Prediction. Ray Kurzweil confirmed he has neither revised nor redefined his prediction of AGI, still believing it will happen by 2029.

data point • 4 months ago • Via Gary Marcus on AI •

Future Expectations. Expect more revisionism and downsized expectations throughout 2024 and 2025.

recommendation • 4 months ago • Via Gary Marcus on AI •

Kurzweil's New Projection. In an interview published in WIRED, Kurzweil let his prediction slip back, for the first time, to 2032.

data point • 4 months ago • Via Gary Marcus on AI • www.wired.com

Expectations for LLMs. The ludicrously high expectations from the last 18 ChatGPT-drenched months were never going to be met.

insight • 4 months ago • Via Gary Marcus on AI •

OpenAI's CTO Admission. OpenAI's CTO Mira Murati acknowledged that there is no mind blowing GPT-5 behind the scenes as of yet.

data point • 4 months ago • Via Gary Marcus on AI • x.com

Public Predictions. Nobody to my knowledge has kept systematic track of the predictions, but I took a quick and somewhat random look at X and had no trouble finding many predictions, going back to 2023, almost always optimistic.

data point • 4 months ago • Via Gary Marcus on AI •

GPT-5 Training Status. Sam Altman officially announced just a few weeks ago that OpenAI had only just started training GPT-5.

insight • 4 months ago • Via Gary Marcus on AI •

CTO Statement. Mira Murati promised we’d someday see 'PhD-level' models, the next big advance over today’s models, but not for another 18 months.

insight • 4 months ago • Via Gary Marcus on AI •

Delayed GPT-5 Arrival. Today is June 20 and I still don’t see squat. It would now appear that Business Insider’s sources were confused, or overstating what they knew.

insight • 4 months ago • Via Gary Marcus on AI •

Hallucination Concerns. Gary Marcus is still betting that GPT-5 will continue to hallucinate and make a bunch of wacky errors, whenever it finally drops.

insight • 4 months ago • Via Gary Marcus on AI •

Future Predictions Meme. Now arriving Gate 2024, Gate 2025, ... Gate 2026.

insight • 4 months ago • Via Gary Marcus on AI •

New Meme Observed. By now there’s actually a new meme in town. This one’s got even more views.

insight • 4 months ago • Via Gary Marcus on AI •

Confidence in Predictions. A lot of them got tons of views... What stands out the most, maybe, is the confidence with which a lot of them were presented.

insight • 4 months ago • Via Gary Marcus on AI •

Interpretation Misunderstanding. Gary Marcus misunderstood Ray Kurzweil to be revising his prediction for AGI to a later year (perhaps 2032).

insight • 4 months ago • Via Gary Marcus on AI •

Opposing Views on AGI. Gary Marcus stands by his own prediction that we will not see AGI by 2029, per criteria he discussed here.

insight • 4 months ago • Via Gary Marcus on AI • garymarcus.substack.com

Debate Potential. Ray Kurzweil and Gary Marcus talked about having a debate, which they hope will come to pass.

recommendation • 4 months ago • Via Gary Marcus on AI •

AGI Prediction Clarification. Ray Kurzweil confirmed he has neither revised nor redefined his prediction of AGI, still defined as AI that can perform any cognitive task an educated human can, and still believes it will happen by 2029.

insight • 4 months ago • Via Gary Marcus on AI •

Starting Point. Gary Marcus thinks we have maybe one shot to get AI policy right in the US, and that we aren't off to a great start.

insight • 4 months ago • Via Gary Marcus on AI •

Reality Check Needed. We need a President who can sort truth from bullshit, in order to develop AI policies that are grounded in reality.

recommendation • 4 months ago • Via Gary Marcus on AI •

Corporate Promises. We need a President who can recognize when corporate leaders are promising things far beyond what is currently realistic.

recommendation • 4 months ago • Via Gary Marcus on AI •

Tech Hype Shift. The big tech companies are hyping AI with long term promises that are impossible to verify.

insight • 4 months ago • Via Gary Marcus on AI •

Presidential Understanding. We cannot afford to have a President in 2024 who doesn't fully grasp this.

recommendation • 4 months ago • Via Gary Marcus on AI •

Future AI Changes. AI is going to change everything, if not tomorrow, sometime over the next 5-20 years, some ways for good, some for bad.

insight • 4 months ago • Via Gary Marcus on AI •

Current AI Errors. Businesses are finally finding this out, too. (Headline in WSJ: 'AI Work Assistants Need a Lot of Handholding', because they are still riddled with errors.)

data point • 4 months ago • Via Gary Marcus on AI • www.wsj.com

AI Limitations. Generative AI does in fact (still) have enormous limitations, just as I anticipated.

data point • 4 months ago • Via Gary Marcus on AI •

AI Ignored. Neither president even mentioned AI, which was a travesty of a different sort.

insight • 4 months ago • Via Gary Marcus on AI • www.nytimes.com

Debate Performance. Former President (and convicted felon) Donald Trump lied like an LLM last night, but still won the debate, because Biden's delivery was so weak.

insight • 4 months ago • Via Gary Marcus on AI •

Understanding Science. Above all else, we need a President who understands and appreciates science.

recommendation • 4 months ago • Via Gary Marcus on AI •

Urgent AI Policies. We need a President who can get Congress to recognize the true urgency of the moment, since Executive Orders alone are not enough.

recommendation • 4 months ago • Via Gary Marcus on AI •

Importance of Symbols. I don’t think metacognition can work without bringing explicit symbols back into the mix; they seem essential for high-level reflection.

insight • 4 months ago • Via Gary Marcus on AI •

Funding Concerns. Spending upwards of 100 billion dollars on the current approach seems wasteful if it's unlikely to get to AGI or ever be reliable.

insight • 4 months ago • Via Gary Marcus on AI •

Call for Metacognition. Scaling is not the most interesting dimension; instead, we need techniques, such as metacognition, that can reflect on what is needed and how to achieve it.

insight • 4 months ago • Via Gary Marcus on AI • en.wikipedia.org

Skepticism on AGI. Many tech leaders have discovered that the best way to raise valuations is to hint that AGI is imminent.

insight • 4 months ago • Via Gary Marcus on AI •

Hope for Change. Gary Marcus hopes that people will take what Gates said seriously.

insight • 4 months ago • Via Gary Marcus on AI •

Neurosymbolic AI's Potential. Neurosymbolic AI has long been an underdog; in the end, I expect it to come from behind and be essential.

insight • 4 months ago • Via Gary Marcus on AI •

Need for Robust Software. Tech giants need serious commitment to software robustness.

insight • 4 months ago • Via Gary Marcus on AI •

Distress Over Regulation. Gary Marcus is deeply distressed that certain tech leaders and investors are putting massive support behind the presidential candidate least likely to regulate software.

insight • 4 months ago • Via Gary Marcus on AI •

AI Regulation Concerns. An unregulated AI industry is a recipe for disaster.

insight • 4 months ago • Via Gary Marcus on AI •

Shortsighted Innovation. Rushing innovative tech without robust foundations seems shortsighted.

insight • 4 months ago • Via Gary Marcus on AI •

Generative AI Limitations. Leaving more and more code writing to generative AI, which grasps syntax but not meaning, is not the answer.

insight • 4 months ago • Via Gary Marcus on AI • t.co

Black Box AI Issues. Chasing black box AI, difficult to interpret, and difficult to debug, is not the answer.

insight • 4 months ago • Via Gary Marcus on AI •

AI Engineering Techniques. As Ernie Davis and I pointed out in Rebooting AI, five years ago, part of the reason we struggle with complex AI systems is that we still lack adequate techniques for engineering complex systems.

insight • 4 months ago • Via Gary Marcus on AI •

Structural Integrity Lacking. Twenty years ago, Alan Kay said 'Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.'

data point • 4 months ago • Via Gary Marcus on AI •

Software Reliability Needed. The world needs to up its software game massively. We need to invest in improving software reliability and methodology, not rushing out half-baked chatbots.

recommendation • 4 months ago • Via Gary Marcus on AI •

Integrating Prompt Testing. By running prompt tests regularly, you can catch issues early and ensure that prompts continue to perform well as you make changes and as the underlying LLMs are updated.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Evaluating LLM Outputs. Promptfoo offers various ways to evaluate the quality and consistency of LLM outputs.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Time Savings. Prompt testing saves time in the long run by catching bugs early and preventing regressions.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Introduction to Prompt Testing. Prompt testing is a technique specifically designed for testing LLMs and generative AI systems, allowing developers to write meaningful tests and catch issues early.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
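
The idea described above can be sketched in a few lines of plain Python. This is a hand-rolled illustration, not promptfoo's actual API; `run_prompt_tests`, `stub_model`, and the check helpers are hypothetical stand-ins.

```python
# Minimal prompt-testing harness: run each prompt through a model
# and evaluate the output against simple named checks.

def run_prompt_tests(model, cases):
    """model: callable prompt -> str; cases: list of (prompt, checks).

    Each check is a (name, predicate) pair; returns the failing pairs.
    """
    failures = []
    for prompt, checks in cases:
        output = model(prompt)
        for name, check in checks:
            if not check(output):
                failures.append((prompt, name))
    return failures

# A stub "model" so the example is self-contained; in practice this
# would call an actual LLM.
def stub_model(prompt):
    return "Paris is the capital of France."

cases = [
    ("What is the capital of France?", [
        ("mentions Paris", lambda out: "Paris" in out),
        ("is concise", lambda out: len(out) < 200),
    ]),
]

print(run_prompt_tests(stub_model, cases))  # [] when all checks pass
```

Running a suite like this on every change (and on every model update) is what catches silent regressions early.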

Testing Necessity. New LLMs are released, existing models are updated, and the performance of a model can shift over time.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Importance of Testing. LLMs can generate nonsensical, irrelevant, or even biased responses.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Newsletter Growth. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Expert Contributions. In my series Guests, I invite experts to come in and share their insights on various topics that they have studied or worked on.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Getting Started with Prompt Testing. Integrating prompt testing into your development workflow is easy.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Conclusion on Testing. Prompt testing provides a way to write meaningful tests for these systems, helping catch issues early and save significant time in the development process.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Product-Centric Metrics. Evaluate models based on metrics aligned with business goals, such as click-through rate or user churn, to ensure they deliver tangible value.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Dynamic Validation. Continuously update validation datasets to reflect real-world data and capture evolving patterns, ensuring accurate performance assessments.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Active Model Evaluation. Keeping models effective requires active and rigorous evaluation processes.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Three Vs of MLOps. Success in MLOps hinges on three crucial factors: Velocity, Validation, and Versioning.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Frequent Retraining. Regularly retraining models on fresh, labeled data helps mitigate performance degradation caused by data drift and evolving user behavior.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Collaborative Success. Successful project ideas often stem from collaboration with domain experts, data scientists, and analysts.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Overemphasis on Models. A common mistake that teams make is to overemphasize the importance of models and underestimate how much the addition of simple features can contribute to performance.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

ML Engineer Tasks. ML engineers engage in four key tasks: data collection and labeling, feature engineering and model experimentation, model evaluation and deployment, and ML pipeline monitoring and response.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

MLOps Investment. Investing in MLOps enables the development of 10x teams, which are more powerful in the long run.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

MLOps Importance. Organizations often underestimate the importance of investing in the right MLOps practices.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Sustaining Model Performance. Maintaining models post-deployment requires deliberate practices such as frequent retraining on fresh data, having fallback models, and continuous data validation.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Simplicity in Models. Prioritizing simple models and algorithms over complex ones can simplify maintenance and debugging while still achieving desired results.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Reducing Alert Fatigue. Focus on Actionable Alerts: Prioritize alerts that indicate real problems requiring immediate attention.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Alert Fatigue Awareness. A common pitfall in data quality monitoring is alert fatigue.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Data Leakage Prevention. Thorough Data Cleaning and Validation: Scrutinize your data for inconsistencies, missing values, and potential leakage points.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Risks with Jupyter Notebooks. Notebooks trade quality for simplicity and velocity.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Tools and Experience. Engineers like tools that enhance their experience.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Anti-Patterns in MLOps. Several anti-patterns hinder MLOps progress, including the mismatch between industry needs and classroom education.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Streamline Deployments. Streamlining deployments and tools that predict end-to-end gains could minimize wasted effort.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Long Tail of ML Bugs. Debugging ML pipelines presents unique challenges due to the unpredictable and often bespoke nature of bugs.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Handling Data Errors. These can be addressed by developing/buying tools for real-time data quality monitoring and automatic tuning of alerting criteria.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Data Error Handling. ML engineers face challenges in handling a spectrum of data errors, such as schema violations, missing values, and data drift.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
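
The first two error classes above (schema violations and missing values) can be caught with a simple gate before data reaches a model. A minimal, hypothetical validator; real pipelines typically use dedicated data-quality tools:

```python
def validate_rows(rows, schema):
    """schema: {column: type}. Returns a list of (row_index, problem) pairs."""
    errors = []
    for i, row in enumerate(rows):
        for col, typ in schema.items():
            if col not in row or row[col] is None:
                errors.append((i, f"missing {col}"))
            elif not isinstance(row[col], typ):
                errors.append((i, f"schema violation in {col}"))
    return errors

schema = {"user_id": int, "amount": float}
rows = [
    {"user_id": 1, "amount": 9.99},
    {"user_id": "2", "amount": 4.50},  # schema violation: string id
    {"user_id": 3},                    # missing value: no amount
]
print(validate_rows(rows, schema))
```

Drift, the third class, needs statistical monitoring over time rather than a per-row check.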

Development-Production Mismatch. There are discrepancies between development and production environments, including data leakage; differing philosophies on Jupyter Notebook usage; and non-standardized code quality.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

ML Engineering Tasks. The 4 major tasks that an ML Engineer works on.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Machine Learning Breakdown. In my series Breakdowns, I go through complicated literature on Machine Learning to extract the most valuable insights.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Documenting Knowledge. To avoid this, prioritize documentation, knowledge sharing, and cross-training.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Tribal Knowledge Risks. Undocumented Tribal Knowledge can create bottlenecks and dependencies, hindering collaboration.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

C*-Algebraic ML. Looks like more and more people are looking to integrate Complex numbers into Machine Learning.

insight • 4 months ago • Via Artificial Intelligence Made Simple • arxiv.org

Saudi Arabia's Neom Project. The Saudi government had hoped to have 9 million residents living in 'The Line' by 2030, but this has been scaled back to fewer than 300,000.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Fractal Molecule Discovery. Researchers from Germany, Sweden, and the UK have discovered an enzyme produced by a single-celled organism that can arrange itself into a fractal.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.sciencealert.com

Software Design Principles. During the design and implementation process, I found that the following list of 'rules' kept coming back up over and over in various scenarios.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Generative AI Insights. Some really good insights on building Gen AI at LinkedIn.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple • www.linkedin.com

LLM Reading Notes. The May edition of my LLM reading notes is out.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.linkedin.com

Drug Design Transformation. We hope AlphaFold 3 will help transform our understanding of the biological world and drug discovery.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

AlphaFold 3 Predictions. In a paper published in Nature, we introduce AlphaFold 3, a revolutionary model that can predict the structure and interactions of all life’s molecules with unprecedented accuracy.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.nature.com

Spotlight on Aziz. Mohamed Aziz Belaweid writes the excellent 'Aziz et al. Paper Summaries', where he summarizes recent developments in AI.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple • azizbelaweid.substack.com

AI Education Support. Your generosity is crucial to keeping our cult free and independent, and to helping me provide high-quality AI Education to everyone.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

AI Made Simple Community. We started an AI Made Simple Subreddit.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.reddit.com

Language Processing Potential. Text Diffusion might be the next frontier of LLMs, at least for specific types of tasks.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Efficient Time Series Imputation. CSDI, using score-based diffusion models, improves upon existing probabilistic imputation methods by capturing temporal correlations.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Emerging LLM Techniques. Microsoft's GENIE achieves comparable performance with state-of-the-art autoregressive models and generates more diverse text samples.

data point • 4 months ago • Via Artificial Intelligence Made Simple • dl.acm.org

Versatility of DMs. Diffusion models are applicable to a wide range of data modalities, including images, audio, molecules, etc.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Step-by-Step Control. The step-by-step generation process in diffusion models allows users to exert greater control over the final output, enabling greater transparency.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

AlphaFold 3 Innovation. Google's AlphaFold 3 is gaining a lot of attention for its potential to revolutionize bio-tech. One of the key innovations that led to its performance gains over previous methods was its utilization of diffusion models.

insight • 4 months ago • Via Artificial Intelligence Made Simple • blog.google

Diffusion Models Explained. Diffusion Models are generative models that follow two simple steps: first, destroy training data by incrementally adding Gaussian noise; then, train a model to recover the data by reversing this noising process.

insight • 4 months ago • Via Artificial Intelligence Made Simple • substackcdn.com

High-Quality Generation. Diffusion models generate data with exceptional quality and realism, surpassing previous generative models in many tasks.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Application in Medical Imaging. Diffusion models have shown great promise in reconstructing Medical Images.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Greenwashing Example. Europe's largest oil and gas company, Shell, was accused of selling millions of carbon credits tied to CO2 removal that never took place.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Pay What You Can. We follow a 'pay what you can' model, which allows you to support within your means.

data point • 4 months ago • Via Artificial Intelligence Made Simple • artificialintelligencemadesimple.substack.com

Share Interesting Content. The goal is to share interesting content with y’all so that you can get a peek behind the scenes into my research process.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Meta Llama-3 Release. Our first agent is a finetuned Meta-Llama-3-8B-Instruct model, which was recently released by the Meta GenAI team.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Deep Learning Method Spotlight. The DSDL framework significantly outperforms other dynamical and deep learning methods.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Venture Capital Overview. A great overview by Rubén Domínguez Ibar of how venture capital firms make decisions.

insight • 4 months ago • Via Artificial Intelligence Made Simple • www.linkedin.com

AI Regulation Insight. The regulation is primarily based on how risky your use case is rather than what technology you use.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Fungal Computing Potential. Unlock the secrets of fungal computing! Discover the mind-boggling potential of fungi as living computers.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Gaming and Chatbots. Limited Risk AI Systems like chatbots or content generation require transparency to inform users they are interacting with AI.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

High-Risk AI Systems. High-Risk AI Systems are involved in critical sectors like healthcare, education, and employment, where there's a significant impact on people's safety or fundamental rights.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Community Spotlight Resource. Kiki's Bytes is a super fun YouTube channel that covers various System Design case studies.

insight • 4 months ago • Via Artificial Intelligence Made Simple • www.youtube.com

Upcoming Articles Preview. Curious about what articles I’m working on? Here are the previews for the next planned articles.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Neural Networks Versatility. Thanks to their versatility, Neural Networks are a staple in most modern Machine Learning pipelines.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Credit Scoring Adaptation. Factors that predicted high creditworthiness a few years ago might not hold true today due to changing economic conditions or consumer behavior.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Evolving Language Models. Language Models trained on social media data need to adapt to constantly evolving language use, slang, and emerging topics.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Simplifying Data Augmentation. Before you decide to get too clever, consider the statement from TrivialAugment: the simplest method was so far overlooked, even though it performs comparably or better.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Gradient Reversal Layer. The gradient reversal layer acts as an identity function during the forward pass but reverses gradients during backpropagation, creating a minimax game between the feature extractor and the domain classifier.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
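
A minimal sketch of the gradient reversal layer's two behaviors, framework-free for clarity (a real implementation hooks into autograd, e.g. via a custom backward function):

```python
def grl_forward(x):
    # Identity on the forward pass: features flow through unchanged.
    return x

def grl_backward(grad, lambd=1.0):
    # Reversed (and scaled) gradient on the backward pass, so the
    # feature extractor is pushed to *confuse* the domain classifier
    # instead of helping it.
    return -lambd * grad

print(grl_forward(3.0))        # 3.0
print(grl_backward(0.8, 0.5))  # -0.4
```

The scale `lambd` controls how strongly domain confusion competes with the main task objective; it is usually annealed during training.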

Impact on Sentiment Analysis. Our experiments on a sentiment analysis classification benchmark... show that our neural network for domain adaptation algorithm has better performance than either a standard neural network or an SVM.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Adversarial Training Process. Domain-Adversarial Training (DAT) involves training a neural network with two competing objectives: to accurately perform the main task and to confuse a domain classifier that tries to distinguish between source and target domain data.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

The Role of DANN. DANNs theoretically attain domain invariance by learning domain-invariant features.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Mitigating Distribution Shift. Good data + adversarial augmentation + constant monitoring works wonders.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Sources of Distribution Shift. Possible sources of distribution shift include sample selection bias, non-stationary environments, domain adaptation challenges, data collection and labeling issues, adversarial attacks, and concept drift.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Understanding Distribution Shift. Distribution shift, also known as dataset shift or covariate shift, is a phenomenon in machine learning where the statistical distribution of the input data changes between the training and deployment environments.

data point • 4 months ago • Via Artificial Intelligence Made Simple •
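
One common, simple way to quantify such a shift between training and live data is the Population Stability Index (PSI); the binning scheme and the conventional alarm threshold of 0.25 used below are heuristics, not hard rules.

```python
import math

def psi(expected, actual, bins=5, lo=0.0, hi=1.0):
    """Population Stability Index between two samples of values in [lo, hi]."""
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[i] += 1
        # Smooth empty bins so the log below is always defined.
        return [max(c, 1) / max(len(xs), 1) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [i / 100 for i in range(100)]                    # uniform scores
same  = [i / 100 for i in range(100)]                    # no shift
drift = [min(i / 200 + 0.5, 0.99) for i in range(100)]   # shifted upward

print(psi(train, same))   # ~0: stable
print(psi(train, drift))  # large: investigate (>0.25 is a common alarm level)
```

Tracking a statistic like this per feature, per day, is a cheap first line of defense against silent model decay.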

Improving Generalization. There are several ways to improve generalization such as implementing sparsity and/or regularization to reduce overfitting and applying data augmentation to mithridatize your models.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Challenges in Neural Networks. There are several underlying issues with the training process that scale does not fix, chief amongst them being distribution shift and generalization.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Community and Introspection. Epicurus encouraged his followers to form close-knit communities that allow their members to step back and help each other critically analyze the events around them.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Friendship Statistics. People with no friends or poor-quality friendships are twice as likely to die prematurely, according to Holt-Lunstad's meta-analysis of more than 308,000 people.

data point • 4 months ago • Via Artificial Intelligence Made Simple • doi.org

Friendship Importance. Epicurus has a particularly strong emphasis on the importance of friendship as a must for a happy life.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Social Media Awareness. Epicurean philosophy is a good reminder to keep vigilant about how we’re being influenced by the constant subliminal messaging and to only pursue the pleasures that we want for ourselves.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Epicurean Philosophy. Epicurean philosophy is based on a simple supposition: we are happy when we remove the things that make us unhappy.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Reading Recommendation. The plan is to do one of these a month as a special reading recommendation.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Self-Reflection Necessity. A good community directly benefits self-reflection.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Happiness Through Simplicity. True happiness doesn’t come from endlessly chasing pleasure, but from systematically eliminating the sources of our unhappiness.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Research Areas. A lot of current research focuses on LLM architectures, data sources, prompting, and alignment strategies.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Greater Performance Gains. AnglE consistently outperforms SBERT, achieving an absolute gain of 5.52%.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

AnglE Optimization. AnglE optimizes not only the cosine similarity between texts but also the angle to mitigate the negative impact of the saturation zones of the cosine function on the learning process.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Contrastive Learning Impact. Contrastive Learning encourages similar examples to have similar embeddings and dissimilar examples to have distinct embeddings.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
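
That objective can be written as a tiny InfoNCE-style loss. A pure-Python sketch with cosine similarity as the score (real systems compute this over batches of embeddings from a deep encoder; the toy vectors below are illustrative):

```python
import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temp=0.1):
    """-log p(positive | anchor): small when the positive is the closest embedding."""
    scores = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    exps = [math.exp(s / temp) for s in scores]
    return -math.log(exps[0] / sum(exps))

anchor   = [1.0, 0.0]
positive = [0.9, 0.1]  # nearly the same direction -> small loss
negative = [0.0, 1.0]  # orthogonal -> easily pushed apart
print(contrastive_loss(anchor, positive, [negative]))
```

Minimizing this pulls the positive pair together and pushes the negatives away, which is exactly the geometry the entry above describes.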

Modeling Relations. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space uses complex numbers for knowledge graph embedding.

data point • 4 months ago • Via Artificial Intelligence Made Simple •
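
RotatE's core idea, a relation as a rotation in the complex plane, is a one-liner per embedding dimension. A toy sketch (the two-dimensional embeddings and phases are made up for illustration):

```python
import cmath
import math

def rotate_score(head, relation_phases, tail):
    """RotatE-style distance: rotate the head by the relation, compare to the tail.

    head/tail: lists of complex embedding components; the relation is a list
    of phases, i.e. unit-modulus complex numbers exp(i*theta), so rotation
    changes direction but preserves magnitude.
    """
    predicted = [h * cmath.exp(1j * th) for h, th in zip(head, relation_phases)]
    return sum(abs(p - t) for p, t in zip(predicted, tail))

head = [1 + 0j, 0 + 1j]
rel  = [math.pi / 2, math.pi / 2]  # rotate each component by 90 degrees
tail = [0 + 1j, -1 + 0j]           # exactly where the rotation lands
print(rotate_score(head, rel, tail))  # ~0: the triple (h, r, t) fits
```

A low score means the relation plausibly links head to tail; composition of relations falls out naturally, since rotations compose by adding phases.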

Complex Geometry Advantage. The complex plane provides a richer space to capture nuanced relationships and handle outliers.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Orthogonality Benefits. Orthogonality helps the model to capture more nuanced relationships and avoid unintended correlations between features.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Angular Representation. Focusing on angles rather than magnitudes avoids the saturation zones of the cosine function, enabling more effective learning and finer semantic distinctions.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Saturation Zones. The saturation zones of the cosine function can kill the gradient and make the network difficult to train.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
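The saturation effect is easy to see numerically: the gradient of cos(θ) is -sin(θ), which nearly vanishes as θ approaches 0 or π, i.e. exactly where the cosine similarity is close to ±1. The snippet below (an illustrative check, not from the article) estimates that gradient by central differences.

```python
import math

def grad_cos(theta, eps=1e-6):
    # Central-difference estimate of d/dtheta cos(theta) = -sin(theta).
    return (math.cos(theta + eps) - math.cos(theta - eps)) / (2 * eps)

# Near theta = 0 (cosine similarity ~ 1) the gradient almost vanishes,
# so a loss driven directly by cos(theta) barely updates the model.
# At theta = pi/2 the gradient has magnitude ~1, far from saturation.
```

This is the motivation for optimizing the angle itself (as AnglE does) rather than only the cosine value.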

Challenges in Embeddings. Current Embeddings are held back by three things: Sensitivity to Outliers, Limited Relation Modeling, and Inconsistency.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Enhancing NLP. Good Embeddings allow three important improvements: Efficiency, Generalization, and Improved Performance.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Next-Gen Embeddings. Today we will primarily be looking at 4 publications to see how we can improve embeddings by exploring a dimension that has been left untouched: their angles.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

LLMs Hitting Wall. This is what leads to the impression that "LLMs are hitting a wall".

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Critical Flaws. Such developments have 3 inter-related critical flaws: They mostly work by increasing the computational costs of training and/or inference, they are a lot more fragile than people realize, and they are incredibly boring.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Mental Space for Writing. Writing/Research takes a lot of mental space, and I don’t think I could do a good job if I was constantly firefighting these issues.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Communication Efforts. I have started communication with the reader, my company, and Stripe/the bank.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Long Review Process. I have been told the review by the bank could take up to 3 months.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Stripe's Negative Balance Policy. Stripe does not let you use future deposits to settle balances, which makes sense from their perspective but leaves me in this weird situation.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Stripe Payouts Paused. Due to all of this, Stripe has paused all my payouts.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Financial Loss. I lose money on every fraud claim. In this case, Stripe has removed 70 USD from my Stripe account: 50 for the base plan + 20 in fees.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Fraudulent Claim Issue. Unfortunately, one of the readers missed this. They signed up for a 50 USD/year plan and marked that transaction as fraudulent, causing complications.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Indefinite Pause. AI Made Simple will be going on an indefinite pause now.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Change in Payout Schedule. I’ve switched the payout schedule to monthly to ensure that I always have a buffer in my Stripe Account to handle issues like this.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Client Payment Process. I am monetizing this newsletter through my employer, SVAM International (US work laws bar me from taking money from anyone who is not my employer).

insight • 4 months ago • Via Artificial Intelligence Made Simple • artificialintelligencemadesimple.substack.com

Spline Usage. KANs use B-splines to approximate activation functions, providing accuracy, local control, and interpretability.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
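The B-spline machinery behind KAN activations can be sketched with the standard Cox-de Boor recursion. This is an assumed minimal illustration, not the paper's code: each basis function has local support, so adjusting one coefficient changes the learned activation only locally, which is the "local control" the card mentions.

```python
def bspline_basis(i, k, t, knots):
    """Value of the i-th degree-k B-spline basis function at t (Cox-de Boor)."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] != knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) \
            * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] != knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def spline_activation(t, coeffs, knots, degree=2):
    # A learnable activation = weighted sum of local B-spline bases;
    # the coefficients are what a KAN would train.
    return sum(c * bspline_basis(i, degree, t, knots) for i, c in enumerate(coeffs))
```

On a uniform knot vector the bases sum to one inside the valid span (partition of unity), which keeps the approximation well-behaved.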

Interactive KANs. Users can collaborate with KANs through visualization tools and symbolic manipulation functionalities.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Explainability Benefits. KANs are more explainable, which is a big plus for sectors where model transparency is critical.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Accuracy of KANs. KANs can achieve lower RMSE loss with fewer parameters compared to MLPs for various tasks.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Performance and Training. KAN training is 10x slower than MLPs, which may limit their adoption in more mainstream directions that are dominated by scale.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Sparse Compositional Structures. A function has a sparse compositional structure when it can be built from a small number of simple functions, each of which only depends on a few input variables.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

KAN Advantages. KANs use learnable activation functions on edges, which makes them more accurate and interpretable, especially useful for functions with sparse compositional structures.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Kolmogorov-Arnold Representation. The KART states that any continuous function with multiple inputs can be created by combining simple functions of a single input (like sine or square) and adding them together.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
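A tiny example in the spirit of KART (illustrative, not a formal instance of the theorem): two-input multiplication can be rebuilt from a single univariate function (squaring) plus addition and subtraction.

```python
def square(u):
    # A simple univariate "inner" function, as in the card's sine/square examples.
    return u * u

def product(x, y):
    # x*y = ((x+y)^2 - (x-y)^2) / 4 : a multivariate function expressed
    # entirely through univariate operations and addition.
    return (square(x + y) - square(x - y)) / 4
```

This is the flavor of the representation KANs exploit: multivariate structure decomposed into sums and compositions of one-dimensional functions.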

KAN Overview. This article will explore KANs and their viability in the new generation of Deep Learning.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Educational Importance. Even if we find fundamental limitations that make KANs useless, studying them in detail will provide valuable insights.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Grid Extension Technique. The grid extension technique allows KANs to adapt to changes in data distribution by increasing the grid density during training.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Need for Public Dialogue. Encouraging open dialogue and debate fosters critical thinking, raising awareness about oppression and empowering individuals to resist manipulation.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Technology and Risk. The lack of risk judgment and decision-making training is prevalent across roles and professions that most need it, revealing gaps in corporate risk management.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Current Gen Z Struggles. 67% of people aged 18 to 34 feel 'consumed' by their worries about money and stress, making it hard to focus, as part of the Gen Z mental health crisis.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.wsav.com

Societal Symptoms. Being 'busy with work' has become a default way for people to spend their time, symptomatic of what Arendt called the 'victory of the animal laborans.'

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Banality of Evil. Arendt argued that Adolf Eichmann's participation in the Holocaust was driven by thoughtlessness and blind obedience to authority, reflecting the concept of 'Banality of Evil.'

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Totalitarianism Origins. Arendt argued that totalitarianism was a new form of government arising from the breakdown of traditional society and an increasingly ungrounded populace.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

The Active Life Components. Hannah Arendt broke life down into 3 kinds of activities: Labor, Work, and Action, emphasizing that modern society deprioritizes the latter two.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Hannah Arendt Insights. Hannah Arendt was a 20th-century political theorist, well known for her thoughts on the nature of evil, the rise of totalitarianism, and her strong emphasis on the importance of living the 'active life.'

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Challenge Comfort with Beliefs. Having good-faith conversations and the willingness to challenge deeply held beliefs is essential to fight dogma and ensure a society of free individuals.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

AI Structural Concerns. The push for AI alignment by corporations may suppress inconvenient narratives, illustrating a paternalistic approach to technology.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

High Cost of Red-teaming. Good red-teaming can be very expensive since it requires a combination of domain expertise and AI expertise for crafting and testing prompts.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

ACG Effectiveness. In the time that it takes ACG to produce successful adversarial attacks for 64% of the AdvBench set, GCG is unable to produce even one successful attack.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

ACG Methodology. The Accelerated Coordinate Gradient (ACG) attack method combines algorithmic insights and engineering optimizations on top of GCG to yield a ~38x speedup.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Haize Labs Automation. Haize Labs seeks to rigorously test an LLM or agent with the purpose of preemptively discovering all of its failure modes.

insight • 4 months ago • Via Artificial Intelligence Made Simple • haizelabs.com

Shift in Gender Output. The base model generates approximately 80% male and 20% female customers while the aligned model generates nearly 100% female customers.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Bias Distribution Changes. The alignment process would likely create new, unexpected biases that were significantly different from your baseline model.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Lower Output Diversity. Aligned models exhibit lower entropy in token predictions, form distinct clusters in the embedding space, and gravitate towards 'attractor states', indicating limited output diversity.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

LLM Understanding. People often underestimate how little we understand about LLMs and the alignment process.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Adversarial Attack Generalization. The attack didn’t apply to any other model (including the base GPT).

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Low Safety Checks. Many of them are too dumb: the prompts and checks for what counts as a 'safe' model set too low a bar to be meaningful.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Red-teaming Purpose. Red-teaming/Jailbreaking is a process in which AI people try to make LLMs talk dirty to them.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Content Focus. While the focus will be on AI and Tech, the ideas might range from business, philosophy, ethics, and much more.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Python Precision Issues. Python compares the integer value against the double precision representation of the float, which may involve a loss of precision, causing these discrepancies.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
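The precision issue in the card above comes down to the 53-bit mantissa of a 64-bit double: integers above 2**53 can no longer all be represented exactly, so an integer that passes through a float can silently change value. A small demonstration (illustrative, not from the article):

```python
# A double has 53 mantissa bits, so 2**53 + 1 has no exact float representation.
big = 2 ** 53

# Converting to float rounds 2**53 + 1 down to 2**53 (round-half-to-even),
# so the two different integers collapse to the same double...
same_as_float = float(big) == float(big + 1)   # True

# ...while Python's int/float comparison is exact, so the original int
# does NOT equal its own float conversion.
exact_compare = (big + 1) == float(big + 1)    # False
```

This is why mixing large integers and floats can produce discrepancies that look paradoxical at first glance.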

Deep Learning Insight. This paper presents a framework, HypOp, that advances the state of the art for solving combinatorial optimization problems in several aspects.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

AI-Relations Trend. The ratio of people who reach out to me for AIRel vs ML roles has gone up significantly over the last 2–3 months.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Model Performance Challenge. We demonstrate here a dramatic breakdown of function and reasoning capabilities of state-of-the-art models trained at the largest available scales.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Community Engagement. If you/your team have solved a problem that you’d like to share with the rest of the world, shoot me a message and let’s go over the details.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Reading Inspired. I figured I’d start sharing whatever AI Papers/Publications, interesting books, videos, etc I came across each week.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Subscriber Growth. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Legal AI Evaluation. We argue that this claim is not supported by the current evidence, diving into AI’s roles in various legal tasks.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

TechBio Resources. We have a strong bio-tech focus this week because of all my reading into that space.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Performance Comparison. MatMul-Free LLMs (MMF-LLMs) achieve performance on par with state-of-the-art Transformers that require far more memory during inference, at scales up to at least 2.7B parameters.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Training Efficiency Improvements. To counteract smaller gradients due to ternary weights, larger learning rates than those typically used for full-precision models should be employed.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Learning Rate Strategy. For the MatMul-free LM, the learning dynamics necessitate a different strategy: maintain the cosine learning-rate scheduler, then reduce the learning rate by half partway through training.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •
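The described schedule can be sketched as follows. All hyperparameters here (base rate, minimum rate, where the halving happens) are assumptions for illustration, not values from the source.

```python
import math

def lr_at(step, total_steps, base_lr=1e-3, min_lr=1e-5, halve_at=0.5):
    # Standard cosine decay from base_lr down to min_lr...
    cos_lr = min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * step / total_steps))
    # ...with the extra halving applied once training passes `halve_at`.
    return cos_lr / 2 if step >= halve_at * total_steps else cos_lr
```

The halving introduces a deliberate discontinuity on top of the smooth cosine curve, matching the "maintain the scheduler, then halve" recipe.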

Memory Transfer Optimization. The Fused BitLinear Layer eliminates the need for multiple data transfers between memory levels, significantly reducing overhead.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Fused BitLinear Layer. The Fused BitLinear Layer combines operations and reduces memory accesses, significantly boosting training efficiency and lowering memory consumption.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Linear Layer Efficiency. Replacing non-linear operations with linear ones can boost your parallelism and simplify your overall operations.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Matrix Multiplication Bottleneck. Matrix multiplications (MatMul) are a significant computational bottleneck in Deep Learning, and removing them enables the creation of cheaper, less energy-intensive LLMs.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Simplified Operations. The secret to their great performance rests on a few innovations that follow two major themes- simplifying expensive computations and replacing non-linearities with linear operations.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Cost Reduction Strategies. The core idea includes restricting weights to the values {-1, 0, +1} to replace multiplications with simple additions or subtractions.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
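The core trick reduces to something very simple: once weights are restricted to {-1, 0, +1}, a dot product needs no multiplications at all. A minimal sketch (illustrative, not the paper's kernel):

```python
def ternary_dot(weights, xs):
    # Dot product with ternary weights using only additions and subtractions.
    acc = 0.0
    for w, x in zip(weights, xs):
        if w == 1:
            acc += x   # +1 -> add the input
        elif w == -1:
            acc -= x   # -1 -> subtract the input
        # 0 -> skip entirely (induces sparsity for free)
    return acc
```

On real hardware the win comes from replacing multiplier circuits and MatMul kernels with cheap accumulation, which is what enables the memory and energy savings the surrounding cards describe.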

GPU Efficiency. We also provide a GPU-efficient implementation of this model which reduces memory usage by up to 61% over an unoptimized baseline during training.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Weekly Reach. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

AI Expertise Invitation. In the series Guests, I will invite these experts to come in and share their insights on various topics that they have studied/worked on.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Choco Milk Cult. Our chocolate milk cult has a lot of experts and prominent figures doing cool things.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

AI-Human Relationship. The AI-human relationship dynamic is not something that I know much about.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Emotional Intelligence. Develop VCSAs to incorporate emotional intelligence to enhance user engagement and satisfaction.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Control Mechanisms. Ensure that VCSAs include features that give users a sense of control and the ability to communicate successfully with their devices.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Design for Imperfection. Design VCSAs to exhibit some level of imperfection to create relaxed interactions.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Managerial Implications. Encourage Partner-like interactions: use speech acts and algorithms to promote the perception of VCSAs as partners.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Partner Relationship. The perception of the relationship with the VCSA as a real partner attributes a distinct personality to the VCSA, making it an appealing entity.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Master Relationship. Some perceived the VCSA as a master, feeling like servants bound by its rules and unpredictable nature.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Servant Relationship. Young consumers frequently envisioned their VCSA as a servant that helps consumers realize their tasks.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Types of Relationships. From the results of the study three different relationships emerge: servant-master dynamic, dominant entity, and equal partners.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Controls and Preferences. Consumers may relate to anthropomorphized products either as others or as extensions of their self.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Self-extension Theory. If you think about the influence that particularly valuable products have on you, you increasingly consider them extensions of yourself.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Uncanny Valley. The Uncanny Valley represents clearly how different degrees of anthropomorphism can change our feelings and attitudes toward technologies and AI assistants.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Anthropomorphism Effects. Evidence shows that anthropomorphized products can enhance consumer preference, make products appear more vivid, and increase their perceived value.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Anthropomorphism Concept. Today's scholars focus on the broad concept of anthropomorphism: essentially, it is humans' tendency to perceive humanlike agents in nonhuman entities and events.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

VCSAs Definition. Alexa, Google Home, and similar devices fall into the category of so-called 'voice-controlled smart assistants' (VCSAs).

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Marriage Proposals. A good portion of those even said they would marry her.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.vocativ.com

Alexa Love. Amazon reported that half a million people told Alexa they loved her.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.geekwire.com

Human-like Interactions. When we interact with devices like Alexa or Google Home, we have different ways of thinking about ourselves and we relate to them differently from other people.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Skepticism on Technology. While I can’t imagine my life without tech, most of the activities that I enjoy are physical that would be very hard to simulate adequately.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Generational Perspective. I am a Gen Z kid who grew up with technology.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Inflection AI's Revenue Failure. Inflection AI’s revenue was, in the words of one investor, “de minimis.” Essentially zilch.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.nytimes.com

Data Contextuality in Healthcare Algorithms. A bombshell study found that a clinical algorithm many hospitals were using to decide which patients need care was showing racial bias.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.aclu.org

AGI and Reduction of Information. The implication of this for generalized intelligence is clear. Reducing the amount of information to focus on what is important to a clearly defined problem is antithetical to generalization.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Contextual Nature of Data. Good or bad data is defined heavily by the context.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Statistical Proxy Limitations. Within any dataset is an implicit value judgment of what we consider worth measuring.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Good Data Removes Noise. Good Data Doesn’t Add Signal; it Removes Noise.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Skepticism About Generalized Intelligence. Ultimately, my skepticism around the viability of 'generalized intelligence' emerging from aggregating data comes from my belief that there is a lot about the world and its processes that we can’t model within data.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Issues with Self-Driving Cars. Self-driving cars do find merges challenging.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

AI Flattens Data Analysis. AI Flattens: By its very nature, AI works by abstracting the commonalities.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Data-Driven vs Mathematical Insights. My thesis can be broken into two parts. Firstly, I argue that Data-Driven Insights are a subclass of mathematical insights.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Yann LeCun's AGI Claim. Yann LeCun has made headlines with his claims that 'LLMs are an off-ramp to AGI.'

insight • 4 months ago • Via Artificial Intelligence Made Simple •

AI's PR Campaign. This has led to a massive PR campaign to rehab AI's image and prepare for the next round of fundraising.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

AI's Financial Cost for Microsoft. This is costing Microsoft more than $650 million.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Generative AI Commercialization Struggles. Close to 2 years since the release of ChatGPT, organizations have struggled to capitalize on the promise of Generative AI.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Curated Insights. In issues of Updates, I will share interesting content I came across.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

AI Market Hype. AI has many useful use cases, but it’s important not to let yourself be manipulated by people trying to piggyback off successful projects to sell their hype.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Knowledge Distillation. Knowledge distillation is a model training method that trains a smaller model to mimic the outputs of a larger model.

insight • 4 months ago • Via Artificial Intelligence Made Simple •
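The card above can be made concrete with the classic distillation objective: the student is trained to match the teacher's temperature-softened output distribution. This is a minimal dependency-free sketch of that loss; the function names and temperature are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature -> softer distribution, exposing more of the
    # teacher's "dark knowledge" about relative class similarities.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on the softened distributions; zero when
    # the student's outputs exactly mimic the teacher's.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In practice this term is usually combined with the ordinary cross-entropy on hard labels, but the mimicry signal is what makes the small student punch above its parameter count.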

Impacts of FoodTech. The impact of food-related sciences is immense, proving that food is not just a basic necessity but a pivotal element in saving lives.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Security Challenges. Demand for high-performance chips designed specifically for AI applications is spiking.

insight • 4 months ago • Via Artificial Intelligence Made Simple • safeesteem.substack.com

AI Tokenization Method. The tokenizer for Claude 3 and beyond handles numbers quite differently to its competitors.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Reading Interest. If you want to keep your finger on your pulse for the tech-bio space, she’s an elite resource.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple • marinatalamanou.substack.com

Technical Insight Source. Hai doesn’t shy away from talking about the Math/Technical Details, which is a rarity on LinkedIn.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Spotlight on Expertise. Hai Huang is a Senior Staff Engineer at Google, working on their AI for productivity projects.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.linkedin.com

Community Engagement. We started an AI Made Simple Subreddit.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.reddit.com

Reading Recommendations. I figured I’d start sharing whatever AI Papers/Publications, interesting books, videos, etc. I came across each week.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Subscriber Goal. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

SWE-Bench Overview. SWE-bench is a comprehensive evaluation framework comprising 2,294 software engineering problems sourced from real GitHub issues and their corresponding pull requests across 12 popular Python repositories.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.swebench.com

Agility in Code Editing. The experiments reveal that agents are sensitive to the amount of content displayed in the file viewer, and striking the right balance is essential for performance.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Optimizing Agent Interfaces. Human user interfaces may not always be the most suitable for agent-computer interactions, calling for improved localization through faster navigation and more informative search interfaces tailored to the needs of language models.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Improving Error Recovery. Implementing guardrails, such as a code syntax checker that automatically detects mistakes, can help prevent error propagation and assist agents in identifying and correcting issues promptly.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

SWE-Agent Functionalities. SWE-Agent offers commands that enable models to create and edit files, streamlining the editing process into a single command that facilitates easy multi-line edits with consistent feedback.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Key ACI Properties. ACIs should prioritize actions that are straightforward and easy to understand to minimize the need for extensive demonstrations or fine-tuning.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Effective ACI Design. By designing effective ACIs, we can harness the power of language models to create intelligent agents that can interact with digital environments in a more intuitive and efficient manner.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

SWE-Agent Performance. When using GPT-4 Turbo as the base LLM, SWE-agent successfully solves 12.5% of the 2,294 SWE-bench test issues, significantly outperforming the previous best resolve rate of 3.8%.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Guest Contributions. In the series Guests, I will invite experts to share their insights on various topics that they have studied/worked on.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Adversarial AI Rise. Deepfakes typify the cutting edge of adversarial AI attacks, with incidents increasing 3,000% last year alone and projected to rise by a further 50% to 60% in 2024.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www.vpnranks.com

AI Functionality Potential. We believe this process creates artifacts or fingerprints that ML models can detect.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Early Project Insights. We were good at the main task but had terrible generalization and robustness.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Social Media Influence. AI models are starting to gain a lot of popularity online, with some influencers earning significant incomes.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Deepfake Detection Collaboration. If your organization deals with Deepfakes, reach out to customize the baseline solution to meet your specific needs.

recommendation • 4 months ago • Via Artificial Intelligence Made Simple •

Model Performance. Our top models achieved strong results: 0.93 (SVC), 0.82 (Random Forest), and 0.80 (XGBoost).

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Affordable Detection Solutions. Many cutting-edge Deepfake Detection setups are too costly to run at scale, severely limiting their utility in high-scale environments like Social Media.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Detection Strategy Development. Our goal is to classify an input image into one of three categories (real, deepfake, and AI-generated), which helps organizations catch Deepfakes amidst enterprise fraud.

insight • 4 months ago • Via Artificial Intelligence Made Simple •

Enterprise Security Concerns. 60% of CISOs, CIOs, and IT leaders are afraid their enterprises are not prepared to defend against AI-powered threats and attacks.

data point • 4 months ago • Via Artificial Intelligence Made Simple •

Deepfake Market Growth. Deepfake-related losses are expected to soar from $12.3 billion in 2023 to $40 billion by 2027, growing at an astounding 32% compound annual growth rate.

data point • 4 months ago • Via Artificial Intelligence Made Simple • www2.deloitte.com