Research Space / AI

OpenAI Hardware Move. OpenAI's move into hardware production is a significant development for the company.

insight • 1 day ago • Via Last Week in AI •

Tom Hanks Warning. Tom Hanks warns followers to be wary of 'fraudulent' ads using his likeness through AI.

insight • 1 day ago • Via Last Week in AI • www.nbcnews.com

China's Chip Advancements. China's chip capabilities are reportedly just 3 years behind TSMC, showcasing rapid advancements.

data point • 1 day ago • Via Last Week in AI • asia.nikkei.com

Investment in AI Companies. Ilya Sutskever's startup, Safe Superintelligence, raises $1B, signaling strong investor confidence in AI.

data point • 1 day ago • Via Last Week in AI • techcrunch.com

AI Regulation in California. California's pending AI regulation bill highlights growing governmental interest in AI oversight.

insight • 1 day ago • Via Last Week in AI • www.nytimes.com

AI Training Advances. Advances in training language models with long-context capabilities are emerging in the AI landscape.

insight • 1 day ago • Via Last Week in AI •

Amazon AI Robotics. Amazon's strategic acquisition in AI robotics is a notable event in the industry.

insight • 1 day ago • Via Last Week in AI •

Micro and Macro Impact. OSS is really good at solving big, important problems that affect tons of people.

insight • 2 days ago • Via Artificial Intelligence Made Simple •

Benefits of Sharing. Companies that share their software get better street cred, outsource a lot of R&D to people for free, and hook more people into their ecosystem.

insight • 2 days ago • Via Artificial Intelligence Made Simple •

Cost Reduction Strategies. Adopting preexisting open-source tools allows companies to reduce costs, build more secure systems, and iterate quickly.

insight • 2 days ago • Via Artificial Intelligence Made Simple •

End-User Benefits. End-users benefit from AI-powered applications that are improved through open-source collaboration.

insight • 2 days ago • Via Artificial Intelligence Made Simple •

Developer Portfolio Boost. Participation in open-source AI projects enhances career prospects as developers build public portfolios showcasing expertise in a highly competitive field.

insight • 2 days ago • Via Artificial Intelligence Made Simple •

Complementary Forces. Open- and closed-source software are often complementary forces, blended together to create a useful end product.

insight • 2 days ago • Via Artificial Intelligence Made Simple •

Learning Budget Support. Many companies have a learning budget that you can expense this newsletter to.

data point • 2 days ago • Via Artificial Intelligence Made Simple • docs.google.com

Open Source Investment. Companies invest significantly in open-source software (OSS) for enhanced innovation and competitive advantage.

insight • 2 days ago • Via Artificial Intelligence Made Simple •

Diverse Contributor Benefits. OSS attracts a diverse set of contributors, leading to more efficient and innovative solutions.

insight • 2 days ago • Via Artificial Intelligence Made Simple •

Fostering Innovation. OSS leads to cheaper, safer, and more accessible products, all benefiting end users.

insight • 2 days ago • Via Artificial Intelligence Made Simple •

Invest in Community Building. It is critical for any group to invest in creating a developer-friendly open-source project through comprehensive documentation and community engagement.

recommendation • 2 days ago • Via Artificial Intelligence Made Simple •

Ecosystem Development. Collaborating with other organizations to create integrated AI solutions expands market opportunities.

recommendation • 2 days ago • Via Artificial Intelligence Made Simple •

Training and Support. Providing training and certification in open-source AI frameworks can also generate revenue and build a community of skilled users.

recommendation • 2 days ago • Via Artificial Intelligence Made Simple •

OSS and Innovation. Open-source projects tend to explore more novel directions, lacking the short-term profit motives of traditional companies.

insight • 2 days ago • Via Artificial Intelligence Made Simple •

AI Potential Advancements. These models represent a major leap forward in AI’s problem-solving potential, paving the way for new advancements in fields like medicine, engineering, and advanced coding tasks.

insight • 2 days ago • Via Last Week in AI •

Autonomous AI Agents. 1,000 autonomous AI agents collaborate to build their own society in a Minecraft server, forming a merchant hub and establishing a constitution.

insight • 2 days ago • Via Last Week in AI • www.trendwatching.com

Humanoid Robot Development. A robotics company in Silicon Valley has made significant progress in developing humanoid robots for real-world work scenarios.

data point • 2 days ago • Via Last Week in AI • techcrunch.com

DataGemma Introduction. Google introduces DataGemma, a pair of open-source AI models that address the issue of inaccurate answers in statistical queries.

data point • 2 days ago • Via Last Week in AI • venturebeat.com

Adobe Firefly Milestone. Adobe's Firefly Services, the company's AI-driven innovation, has reached a milestone of 12 billion generations.

data point • 2 days ago • Via Last Week in AI • www.pymnts.com

Runway AI Upgrade. AI video platform RunwayML has introduced a new video-to-video tool in its latest model, Gen-3 Alpha.

data point • 2 days ago • Via Last Week in AI • www.theverge.com

Corporate Structure Change. Sam Altman announced that the company's non-profit corporate structure will undergo changes in the coming year, moving away from being controlled by a non-profit.

data point • 2 days ago • Via Last Week in AI • fortune.com

API Costs High. For developers, however, it’s worth noting that the model takes much longer to produce outputs, and the API costs for o1 are significantly higher than for GPT-4o.

data point • 2 days ago • Via Last Week in AI •

Training Approach. What sets o1 apart is its training approach—unlike previous GPT models, which were trained to mimic data patterns, o1 uses reinforcement learning to think through problems, step by step.

insight • 2 days ago • Via Last Week in AI •

Reasoning Capabilities. OpenAI describes this release as a 'preview,' highlighting its early-stage nature, and positioning o1 as a significant advancement in reasoning capabilities.

insight • 2 days ago • Via Last Week in AI •

OpenAI o1 Model. OpenAI has introduced this new model as part of a planned series of 'reasoning' models aimed at tackling complex problems more efficiently than ever before.

data point • 2 days ago • Via Last Week in AI • www.theverge.com

Microsoft's Usage Caps. Microsoft's Inflection adds usage caps for Pi; Cerebras Systems launches new AI inference services competing with Nvidia.

insight • 2 days ago • Via Last Week in AI •

U.S. Restrictions on China. U.S. government tightens restrictions on sales of supercomputer components to China.

insight • 2 days ago • Via Last Week in AI • www.tomshardware.com

AI Advancements. Google's AI advancements with Gemini 1.5 models and AI-generated avatars, along with Samsung's lithography progress.

insight • 2 days ago • Via Last Week in AI •

Chinese GPU Access. Chinese Engineers Reportedly Accessing NVIDIA's High-End AI Chips Through Decentralized 'GPU Rental Services'.

insight • 2 days ago • Via Last Week in AI • wccftech.com

Elon Musk's Support. Elon Musk voices support for California bill requiring safety tests on AI models.

insight • 2 days ago • Via Last Week in AI • www.reuters.com

Poll on SB1047. Poll: 7 in 10 Californians Support SB1047, Will Blame Governor Newsom for AI-Enabled Catastrophe if He Vetoes.

data point • 2 days ago • Via Last Week in AI • mailchi.mp

AI Regulation. AI regulation discussions including California's SB1047, China's AI safety stance, and new export restrictions impacting Nvidia's AI chips.

insight • 2 days ago • Via Last Week in AI •

Bias in AI. Biases in AI, prompt-leak attacks, transparency in models, and distributed training optimizations, including the 'distro' optimizer.

insight • 2 days ago • Via Last Week in AI •

Altman's AGI Stance. Altman had, much to my surprise, just echoed my longstanding position that current techniques alone would not be enough to get to AGI.

insight • 6 days ago • Via Gary Marcus on AI •

GPT-4 Prediction. “Still flawed, still limited, seem more impressive on first use”. Almost exactly what I predicted we would see with GPT-4, back on Christmas Day 2022.

insight • 6 days ago • Via Gary Marcus on AI •

Synthetic Data Dependence. The new system appears to depend heavily on synthetic data, and that such data may be easier to produce in some domains (such as those in which o1 is most successful, like some aspects of math) than others.

insight • 6 days ago • Via Gary Marcus on AI •

Update on Strawberry. OpenAI’s latest model, o1, code-named Strawberry, has come out.

data point • 6 days ago • Via Gary Marcus on AI • x.com

Marcus' Dream. Gary Marcus continues to dream of a day in which AI research doesn’t center almost entirely on LLMs.

insight • 6 days ago • Via Gary Marcus on AI •

Content Recommendations. I figured I’d start sharing whatever AI Papers/Publications, interesting books, videos, etc I came across each week.

insight • 6 days ago • Via Artificial Intelligence Made Simple •

AI Summary Study. The reviewers’ overall feedback was that they felt AI summaries may be counterproductive and create further work because of the need to fact-check and refer to original submissions.

insight • 6 days ago • Via Artificial Intelligence Made Simple • www.crikey.com.au

Green Powders Marketing. Good video on the misleading marketing behind Green Powders.

insight • 6 days ago • Via Artificial Intelligence Made Simple • youtu.be

Roaring Bitmaps Impact. By storing these indices as Roaring bitmaps, we are able to evaluate typical boolean filters efficiently, reducing latencies by orders of magnitude.

insight • 6 days ago • Via Artificial Intelligence Made Simple •
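The bitmap-index idea can be sketched in a few lines of plain Python, with integer bitmasks standing in for compressed Roaring bitmaps (an illustrative toy, not the original system's code):

```python
# Sketch: a bitmap index maps each attribute value to the set of matching
# row IDs, and boolean filters reduce to single bitwise operations.
# Plain Python ints stand in for compressed Roaring bitmaps here.

def make_bitmap(row_ids):
    """Encode a set of row IDs as an integer bitmask."""
    bm = 0
    for rid in row_ids:
        bm |= 1 << rid
    return bm

def to_rows(bitmap):
    """Decode a bitmask back into sorted row IDs."""
    rows, rid = [], 0
    while bitmap:
        if bitmap & 1:
            rows.append(rid)
        bitmap >>= 1
        rid += 1
    return rows

# Index: which rows carry each attribute value.
is_active = make_bitmap([0, 2, 3, 7])
is_premium = make_bitmap([2, 3, 5])

# A typical boolean filter (active AND premium) is one bitwise op.
print(to_rows(is_active & is_premium))  # [2, 3]
```

Real Roaring bitmaps add compression and fast set operations on sparse and dense ranges alike, which is where the large latency wins come from.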

AI Adoption Barriers. Until the liabilities and responsibilities of AI models for medicine are clearly spelled out via regulation or a ruling, the default assumption of any doctor is that if AI makes an error, the doctor is liable for that error, not the AI.

insight • 6 days ago • Via Artificial Intelligence Made Simple •

AI in Clinical Diagnosis. Doctors bear a lot of risk for using AI, while model developers don’t.

insight • 6 days ago • Via Artificial Intelligence Made Simple •

Freedom of Speech Analysis. Tobias Jensen discusses content moderation on social media platforms and recent cases that trend toward preventing the harms that can (and have) been caused by improperly moderated social media messages.

insight • 6 days ago • Via Artificial Intelligence Made Simple • futuristiclawyer.com

Highlighting Important Works. I’m going to highlight only two since they bring up extremely important discussions, and I want to get your opinions on them.

insight • 6 days ago • Via Artificial Intelligence Made Simple •

Next Planned Articles. Boeing, DEI, and 9 USD Engineers.

data point • 6 days ago • Via Artificial Intelligence Made Simple •

Survey Participation. Fred Graver is looking into understanding the demand for content around AI and is asking people to fill out a survey.

recommendation • 6 days ago • Via Artificial Intelligence Made Simple • www.reddit.com

Community Engagement. We started an AI Made Simple Subreddit.

data point • 6 days ago • Via Artificial Intelligence Made Simple • www.reddit.com

Ilya Sutskever Funding. Safe Superintelligence (SSI), an AI startup co-founded by Ilya Sutskever, has successfully raised over $1 billion in funding.

data point • 1 week ago • Via Last Week in AI • techcrunch.com

OpenAI AI Chips. OpenAI is reportedly planning to build its own AI chips using TSMC's forthcoming 1.6nm A16 process node, according to United Daily News.

data point • 1 week ago • Via Last Week in AI • www.yahoo.com

California AI Bill. The controversial California bill SB 1047, aimed at preventing AI disasters, has passed the state's Senate and is now awaiting Governor Gavin Newsom's decision.

data point • 1 week ago • Via Last Week in AI • www.nytimes.com

iPhone 16 Launch. Apple has unveiled its iPhone 16 line, which includes the iPhone 16, iPhone 16 Plus, iPhone 16 Pro, and iPhone 16 Pro Max, all designed with Apple Intelligence in mind.

data point • 1 week ago • Via Last Week in AI • finance.yahoo.com

Waymo Collision Data. Waymo's driverless cars have been involved in fewer injury-causing crashes per million miles of driving than human-driven vehicles.

data point • 1 week ago • Via Last Week in AI • www.understandingai.org

AI Image Creation. AI has led to the creation of over 15 billion images since 2022, with an average of 34 million images being created per day.

data point • 1 week ago • Via Last Week in AI • journal.everypixel.com

Global AI Treaty. US, EU, and UK sign the world's first international AI treaty, emphasizing human rights and democratic values as key to regulating public and private-sector AI models.

data point • 1 week ago • Via Last Week in AI • cointelegraph.com

Music Producer Arrested. Music producer arrested for using AI and bots to boost streams and generate AI music, facing charges of money laundering and wire fraud.

insight • 1 week ago • Via Last Week in AI • www.edmtunes.com

AI in Healthcare. Google DeepMind has launched AlphaProteo, an AI system that generates novel proteins to accelerate research in drug design, disease understanding, and health applications.

data point • 1 week ago • Via Last Week in AI • analyticsindiamag.com

Call for Clarity. In an ideal world, moderators would demand clarity on candidates' policies around AI.

recommendation • 1 week ago • Via Gary Marcus on AI •

AI Impacts on Society. AI is likely to change the world in coming years, affecting virtually every aspect of society, from employment to education to healthcare to national defense.

insight • 1 week ago • Via Gary Marcus on AI •

Candidates' AI Plans. It would be a really good time to demand better [AI policies] from candidates; if we don’t, future generations may regret it.

recommendation • 1 week ago • Via Gary Marcus on AI •

AI Policy Neglect. A total neglect of AI policy would be deeply unfortunate; our long-term future may actually be shaped more by AI policy than tariffs.

insight • 1 week ago • Via Gary Marcus on AI •

Future Responsibility. It will be our fault if candidates don’t address AI policy; they certainly aren’t going to bother to talk about it if we don’t let them know it matters.

insight • 1 week ago • Via Gary Marcus on AI •

Vulnerability of Teens. Nonconsensual deep fake porn may especially affect the already vulnerable population of teenage girls, who have been harmed by social media.

insight • 1 week ago • Via Gary Marcus on AI •

Training with MAE. Mean Absolute Error (MAE) is used as the training objective, which is robust to outliers.

insight • 1 week ago • Via Artificial Intelligence Made Simple •
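A tiny illustration (with hypothetical numbers) of why MAE is more robust to outliers than the more common MSE: a single corrupted target enters the MAE linearly but the MSE quadratically.

```python
# Sketch: compare MAE and MSE on predictions where one target is corrupted.
# MSE grows with the square of the outlier error; MAE grows only linearly.

def mae(preds, targets):
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

preds   = [1.0, 2.0, 3.0, 4.0]
clean   = [1.1, 2.1, 2.9, 4.2]
outlier = [1.1, 2.1, 2.9, 104.0]  # one corrupted target

print(mae(preds, clean), mse(preds, clean))      # both small
print(mae(preds, outlier), mse(preds, outlier))  # MSE explodes far faster
```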

Predictive Modeling Framework. The authors have created a fine-tuning process that allows Aurora to excel at both short-term and long-term predictions.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Replay Buffer Mechanism. Aurora implements a replay buffer, allowing the model to learn from its own predictions, improving long-term stability.

insight • 1 week ago • Via Artificial Intelligence Made Simple •
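A minimal sketch of the replay-buffer pattern (a generic version, not Aurora's implementation): recent model predictions are pushed into a bounded pool and sampled back into training.

```python
# Sketch: a bounded replay buffer. The model's own predictions are stored
# and later sampled for training, so the model learns from model-generated
# states rather than only from ground-truth data.
from collections import deque
import random

class ReplayBuffer:
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest entries are evicted

    def push(self, state):
        self.buffer.append(state)

    def sample(self, k):
        return random.sample(list(self.buffer), k)

buf = ReplayBuffer(capacity=3)
for prediction in ["t+6h", "t+12h", "t+18h", "t+24h"]:
    buf.push(prediction)

print(list(buf.buffer))  # ['t+12h', 't+18h', 't+24h'] — capacity is 3
```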

Energy-Efficient Fine-Tuning. LoRA introduces small, trainable matrices to the attention layers, allowing Aurora to fine-tune efficiently while significantly reducing memory usage.

insight • 1 week ago • Via Artificial Intelligence Made Simple •
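The low-rank idea behind LoRA can be sketched in plain Python (an illustrative toy, not Aurora's actual code; the sizes d=512 and r=8 are assumptions): the frozen weight W stays fixed, only the small factors B and A train, and the effective weight is W + B·A.

```python
# Sketch of LoRA: trainable parameters drop from d*d (a full weight
# update) to 2*d*r (two low-rank factors), with r much smaller than d.

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

# Toy example with d=2, r=1.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight (d x d)
B = [[0.5], [0.0]]             # trainable factor (d x r)
A = [[0.0, 1.0]]               # trainable factor (r x d)

W_eff = add(W, matmul(B, A))
print(W_eff)  # [[1.0, 0.5], [0.0, 1.0]]

# Parameter savings at a hypothetical hidden size d=512, rank r=8:
d, r = 512, 8
print(d * d, 2 * d * r)  # 262144 full vs 8192 LoRA-trainable (~3%)
```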

Variable Weighting Methodology. Aurora uses variable weighting, where different weights are assigned to different variables in the loss function to balance their contributions.

insight • 1 week ago • Via Artificial Intelligence Made Simple •
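A minimal sketch of variable weighting in a loss function (the variable names and weights here are invented for illustration, not taken from Aurora): per-variable errors on very different physical scales are rescaled so each contributes comparably.

```python
# Sketch: a total loss as a weighted sum of per-variable errors, so that
# a variable measured in pascals doesn't dominate one measured in m/s.

def weighted_loss(errors, weights):
    """Sum of per-variable errors, each scaled by its weight."""
    return sum(weights[name] * err for name, err in errors.items())

# Hypothetical per-variable MAE values on very different scales.
errors  = {"temperature_k": 1.5, "pressure_pa": 120.0, "wind_ms": 0.8}
weights = {"temperature_k": 1.0, "pressure_pa": 0.01, "wind_ms": 2.0}

print(weighted_loss(errors, weights))  # ≈ 4.3 (1.5 + 1.2 + 1.6)
```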

Rollout Fine-tuning Importance. Rollout fine-tuning addresses the challenge by training Aurora on sequences of multiple predictions, simulating the chain reaction of weather events over time.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

U-Net Architecture. The U-Net architecture allows for multi-scale processing, enabling the model to simultaneously understand local weather patterns and larger-scale atmospheric phenomena.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Swin Transformer Benefits. Swin Transformers excel at capturing long-range dependencies and scaling to large datasets, which is crucial for weather modeling.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Impact of Underreporting. Aurora got almost no attention, indicating a serious misplacement of priorities in the AI Community.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Community Awareness Gap. The ability of foundation models to excel at downstream tasks with scarce data could democratize access to accurate weather and climate information in data-sparse regions, such as the developing world and polar regions.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Sandstorm Prediction. Aurora was able to predict a vicious sandstorm a day in advance, which can be used in the future for evacuations and disaster planning.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Limited Data Handling. Aurora leverages the strengths of the foundation modelling approach to produce operational forecasts for a wide variety of atmospheric prediction problems, including those with limited training data, heterogeneous variables, and extreme events.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Advanced Predictive Capabilities. In under a minute, Aurora produces 5-day global air pollution predictions and 10-day high-resolution weather forecasts that outperform state-of-the-art classical simulation tools and the best specialized deep learning models.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Foundation Model Size. Aurora is a 1.3-billion-parameter foundation model for environmental forecasting.

data point • 1 week ago • Via Artificial Intelligence Made Simple • www.microsoft.com

Expert Invitations. In the series Guests, I will invite these experts to come in and share their insights on various topics that they have studied/worked on.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Chocolate Milk Cult. Our chocolate milk cult has a lot of experts and prominent figures doing cool things.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Infrastructure Creation. AI applications will not generate a net-positive ROI on infrastructure buildout for some time.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

AI Model Revenues. Our best indication of AI app revenue comes from model revenue (OpenAI at an estimated $1.5B in API revenue).

data point • 1 week ago • Via Artificial Intelligence Made Simple •

Energy Demand Increase. Demand is increasing, and the question is what bottlenecks will be alleviated to fulfill that demand.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Data Center Demand. Theoretically, value should flow through the traditional data center value chain.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

AI Total Expenditures. The cloud revenue gives us the real indication of how much value is being invested into AI applications.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

AI Application Revenue. AI applications have generated a very rough estimate of $20B in revenue with multiples higher than that in value creation so far.

data point • 1 week ago • Via Artificial Intelligence Made Simple •

Nvidia Revenue. Last quarter, Nvidia did $26.3B in data center revenue, with $3.7B of that coming from networking.

data point • 1 week ago • Via Artificial Intelligence Made Simple •

Power Scarcity. Hyperscalers will build out power and data-center capacity themselves or through a developer like QTS, Vantage, or CyrusOne.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Compute Power Concerns. All three hyperscalers noted they’re capacity-constrained on AI compute power.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Application Value. ROI on AI will ultimately be driven by application value to end users.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Hyperscaler Decisions. Hyperscalers are making the right CapEx business decisions.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

No Clear ROI. There’s not a clear ROI on AI investments right now.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

AI ROI Debate. For the first time in a year and a half, common opinion is now shifting to the narrative 'Hyperscaler spending is crazy. AI is a bubble.'

insight • 1 week ago • Via Artificial Intelligence Made Simple •

CapEx Growth. Amazon, Google, Microsoft, and Meta have spent a combined $177B on capital expenditures over the last four quarters.

data point • 1 week ago • Via Artificial Intelligence Made Simple •

100K Readers. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 1 week ago • Via Artificial Intelligence Made Simple •

Long-Term Value Creation. Value will be created in unforeseen ways.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

LLM Performance Restrictions. Imposing formatting restrictions on LLMs leads to performance degradation, impacting reasoning abilities significantly.

insight • 1 week ago • Via Artificial Intelligence Made Simple • arxiv.org

Standardizing Text Diversity. This work empirically investigates diversity scores on English texts and provides a diversity score package to facilitate research.

insight • 1 week ago • Via Artificial Intelligence Made Simple • arxiv.org

Impact of LLMs on Diversity. Writing with InstructGPT results in a statistically significant reduction in diversity.

insight • 1 week ago • Via Artificial Intelligence Made Simple • arxiv.org

Dimension Insensitive Metric. This paper introduces the Dimension Insensitive Euclidean Metric (DIEM) which demonstrates superior robustness and generalizability across dimensions.

insight • 1 week ago • Via Artificial Intelligence Made Simple • arxiv.org

Support for Writing. Doing so helps me put more effort into writing/research, reach more people, and supports my crippling chocolate milk addiction.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Notable Content Creator. Artem Kirsanov produces high-quality videos on computational neuroscience and AI, and offers very new ideas/perspectives for traditional Machine Learning people.

insight • 1 week ago • Via Artificial Intelligence Made Simple • www.youtube.com

AI Content Focus. The focus will be on AI and Tech, but ideas might range from business, philosophy, ethics, and much more.

insight • 1 week ago • Via Artificial Intelligence Made Simple •

Previews of Articles. Upcoming articles include 'The Economics of ESports' and 'The economics of Open Source.'

data point • 1 week ago • Via Artificial Intelligence Made Simple •

New Paradigms in NLP. Sebastian Raschka discusses recent pre-training and post-training paradigms in NLP models, highlighting significant new techniques.

insight • 1 week ago • Via Artificial Intelligence Made Simple • magazine.sebastianraschka.com

Risks of Synthetic Training. Training language models on synthetic data leads to a consistent decrease in the diversity of the model outputs through successive iterations.

insight • 1 week ago • Via Artificial Intelligence Made Simple • arxiv.org

OpenAI's New Deal. Ars Technica content is now available in OpenAI services.

insight • 2 weeks ago • Via Last Week in AI • arstechnica.com

Authors' Lawsuit. Authors sue Claude AI chatbot creator Anthropic for copyright infringement.

insight • 2 weeks ago • Via Last Week in AI • abcnews.go.com

California AI Bill Weakening. California weakens bill to prevent AI disasters before final vote, taking advice from Anthropic.

insight • 2 weeks ago • Via Last Week in AI • techcrunch.com

Anysphere Funding. Anysphere, a GitHub Copilot rival, has raised $60M Series A at $400M valuation from a16z, Thrive, sources say.

insight • 2 weeks ago • Via Last Week in AI • techcrunch.com

AMD Acquisition. AMD buying server maker ZT Systems for $4.9 billion as chipmakers strengthen AI capabilities.

insight • 2 weeks ago • Via Last Week in AI • abcnews.go.com

California Regulation. Analysis of California's AI regulation bill SB1047 and legal issues related to synthetic media, copyright, and online personhood credentials.

insight • 2 weeks ago • Via Last Week in AI •

AI Model Scaling. Exploration of the feasibility and investment needed for scaling advanced AI models like GPT-4 and Agent Q architecture enhancements.

insight • 2 weeks ago • Via Last Week in AI •

Perplexity Updates. Perplexity's integration of Flux image generation models and code interpreter updates for enhanced search results.

insight • 2 weeks ago • Via Last Week in AI •

New AI Features. Ideogram AI's new features, Google's Imagen 3, Dream Machine 1.5, and Runway's Gen3 Alpha Turbo model advancements.

insight • 2 weeks ago • Via Last Week in AI •

Episode Summary. Our 180th episode with a summary and discussion of last week's big AI news!

insight • 2 weeks ago • Via Last Week in AI •

Mental Health and Misinformation. We cry for the government or social media companies to do something about worsening mental health and the spread of misinformation, but how many of us have acted positively on these platforms?

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Democracy and Conformity. Tocqueville observed that democratic societies foster a sense of equality among citizens, which can lead to pressure for conformity, homogenizing thought, expression, and behavior.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Over-Reliance on Institutions. Tocqueville noticed a tendency for citizens to increasingly rely on the government under the expectation that an elected government should solve societal problems.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Personal Responsibility. We often expect institutions to make systemic changes without acknowledging the importance of individual responsibility in taking actions that lead to systemic change.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Need for Critical Diversity. When people lose exposure to diverse viewpoints, their capacity to visualize alternatives diminishes, reinforcing conformity.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Intellectual Homogeneity. A populace that is intellectually homogenous tends to rely on external sources for solutions, sacrificing personal agency and responsibility.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Social Media Trends. Advice for content creators often revolves around imitating successful content rather than fostering unique voices, contributing to conformity.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Conformity in Media. Social media and content creation platforms, initially designed for authentic expression, often lead to a relentless drive toward sameness and conformity.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Collective Action Importance. The OSS movement in tech allows people to find their communities and contribute, emphasizing the importance of collective small contributions leading to significant shifts.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Voluntary Associations. Tocqueville noted that Americans constantly form associations for various purposes, which serve as a powerful tool for collective action and public benefit.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Local Community Power. Tocqueville saw voluntary organizations and local community groups as crucial to counterbalance the negative tendencies of democracy.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Tyranny of the Majority. In modern democracies, tyranny manifests through social ostracism rather than physical oppression, leading to self-censorship and a society of self-oppressors.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Agency and Accountability. Tocqueville emphasizes the importance of people accepting agency and accountability for their information diet instead of relying on institutions.

insight • 2 weeks ago • Via Artificial Intelligence Made Simple •

Efficient Small Models. Nvidia's Llama-3.1-Minitron 4B performs comparably to larger models while being more efficient to train and deploy.

insight • 3 weeks ago • Via Last Week in AI • venturebeat.com

Open-Source AI Definition. Open-source AI is defined as a system that can be used, inspected, modified, and shared without restrictions.

insight • 3 weeks ago • Via Last Week in AI • www.technologyreview.com

Authors Sue Anthropic. Authors are suing AI startup Anthropic for using pirated texts to train its chatbot Claude, alleging large-scale theft.

insight • 3 weeks ago • Via Last Week in AI • abcnews.go.com

AI Ethical Concerns. Google DeepMind employees are urging the company to end military contracts due to concerns about AI technology used for warfare.

insight • 3 weeks ago • Via Last Week in AI • www.theverge.com

AI in Ad Creation. Creatopy, which automates ad creation using AI, has raised $10 million and now serves over 5,000 brands and agencies.

insight • 3 weeks ago • Via Last Week in AI • techcrunch.com

Google's AI Image Generator. Google has released a powerful AI image generator, Imagen 3, for free use in the U.S., outperforming other models.

insight • 3 weeks ago • Via Last Week in AI • petapixel.com

Content Partnership. OpenAI has partnered with Condé Nast to display content from its publications within AI products like ChatGPT and SearchGPT.

insight • 3 weeks ago • Via Last Week in AI • arstechnica.com

OpenAI's Regulatory Stance. OpenAI has opposed the proposed AI bill SB 1047 aimed at implementing safety measures, despite public support for regulation.

insight • 3 weeks ago • Via Last Week in AI • www.windowscentral.com

California AI Regulation. Anthropic's CEO supports California's AI bill SB 1047, stating the benefits outweigh the costs, despite some concerns.

insight • 3 weeks ago • Via Last Week in AI • www.pcmag.com

AI for Coding Tasks. Open-source Dracarys models are specifically designed to optimize coding tasks and significantly improve the performance of existing models.

insight • 3 weeks ago • Via Last Week in AI • venturebeat.com

Advanced Long-Context Models. AI21's Jamba 1.5 Large model has demonstrated superior performance in latency tests against similar models.

insight • 3 weeks ago • Via Last Week in AI • finance.yahoo.com

Outperforming Competitors. Microsoft's Phi-3.5 outperforms other small models from Google, OpenAI, Mistral, and Meta on several key metrics.

insight • 3 weeks ago • Via Last Week in AI • www.tomsguide.com

End User Engagement. Users can inspect multiple alternative paths to verify the quality of secondary/tertiary relationships.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Obsession with User Feedback. I’d be lying if I said that there is one definitive approach (or that what we’ve done is absolutely the best approach).

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Machine Learning in Legal Domain. These are the main aspects of the text-based search/embedding that are promising based on research and our own experiments.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

High Cost of Mistakes. A mistake can cost a firm millions of dollars in settlements and serious loss of reputation. This high cost justifies the investment into better tools.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Cost of Legal Expertise. Legal Expertise is expensive. If a law firm can cut down the time required for a project by even a few hours, they are already looking at significant savings.

data point • 3 weeks ago • Via Artificial Intelligence Made Simple •

Importance of RAG. RAG is one of the most important use-cases for LLMs, and the goal is to build the best RAG systems possible.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Optimizations in Distance Measurement. FINGER significantly outperforms existing acceleration approaches and conventional libraries by 20% to 60% across different benchmark datasets.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Integration of Graph-Based Indexes. Given that we’re already working on graphs, another promising direction for us has been integrating graph-based indexes and search.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

User Verification. By letting our users both verify and edit each step of the AI process, we let them make the AI adjust to their knowledge and insight, instead of asking them to change for the tool.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Focus on Transparency. Model transparency is crucial as a few trigger words/phrases can change the meaning/implication of a clause; users need to have complete insight into every step of the process.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Leveraging Control Tokens. We use control tokens, which are special tokens to indicate different types of elements, enhancing our tokenization process.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •
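
As a toy illustration of the idea (not IQIDIS's actual scheme — token names and the splitting rule here are invented), a tokenizer can treat bracketed control tokens as atomic units so element types survive tokenization:

```python
import re

# Illustrative control tokens marking element types in a document.
CONTROL_TOKENS = {"[CLAUSE]", "[CITATION]", "[DEFINITION]"}

def tokenize(text):
    # Control tokens match as single units; everything else splits on whitespace.
    pieces = re.findall(r"\[[A-Z]+\]|\S+", text)
    return [(p, "control" if p in CONTROL_TOKENS else "word") for p in pieces]

tokens = tokenize("[CLAUSE] The tenant shall vacate [CITATION] s. 12(3)")
```

Downstream components can then branch on the `"control"`/`"word"` tag instead of re-parsing raw text.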

Flexible Indexing Approach. Updating the indexes with new information is much cheaper than retraining your entire AI model. Index-based search also allows us to see which chunks/contexts the AI picks to answer a particular query.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Hallucinations in AI. Type 1 Hallucinations are not a worry because our citations are guaranteed to be from the data source, and Type 2 Hallucinations will be reduced significantly through our unique process of constant refinement.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Reducing Costs. Relying on a smaller, Mixture of experts style setup instead of letting bigger models do everything reduces our costs dramatically, allowing us to do more with less.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Focus on User Feedback. Our unique approach to involving the user in the generation process leads to a beautiful pair of massive wins against Hallucinations.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Flexibility in Architecture. The best architecture is useless if it can't fit into your client's processes. Being Lawyer-Led, IQIDIS understands the importance of working within a lawyer's/firm's workflow.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

KI-RAG Challenges. Building KI-RAG systems requires a lot more handling and constant maintenance, making them more expensive than traditional RAG.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Handling Legal Nuances. There is a lot of nuance to Law. Laws can change between regions, different sub-fields weigh different factors, and a lot of law is done in the gray areas.

insight • 3 weeks ago • Via Artificial Intelligence Made Simple •

Need for Higher Adaptability. Building upon this is a priority after our next round of fund-raising (or for any client that specifically requests this).

recommendation • 3 weeks ago • Via Artificial Intelligence Made Simple •

OpenAI's Opposition. OpenAI has just announced that it is opposed to California's SB-1047 despite Altman's public support for AI regulation at the Senate.

insight • 3 weeks ago • Via Gary Marcus on AI • www.theverge.com

Legislation Improvement. Saunders did not think SB-1047 was perfect, but called the proposed legislation "the best attempt I've seen to provide a check on this power."

insight • 3 weeks ago • Via Gary Marcus on AI •

Power Corrupts. If we don't figure out the governance problem, internal and external, before the next big AI advance, we could be in serious trouble.

insight • 3 weeks ago • Via Gary Marcus on AI •

Timelines for AGI. Saunders thinks it is at least somewhat plausible we will see AGI in a few years; I do not.

insight • 3 weeks ago • Via Gary Marcus on AI •

Need for Regulation. If OpenAI (and others in Silicon Valley) succeed in torpedoing SB-1047, self-regulation is in many ways what we will be left with.

insight • 3 weeks ago • Via Gary Marcus on AI •

Call for Accountable Power. Saunders described a metaprinciple: "Don't give power to people or structures that can't be held accountable."

insight • 3 weeks ago • Via Gary Marcus on AI •

Future Whistleblower Protections. One of the most important reasons for passing SB-1047 in California was its whistleblower protections.

insight • 3 weeks ago • Via Gary Marcus on AI • digitaldemocracy.calmatters.org

External Oversight Needed. There should be a role for external governance, as well: companies should not be able to make decisions of potentially enormous magnitude on their own.

insight • 3 weeks ago • Via Gary Marcus on AI •

Governance Concerns. Internal governance is key; it shouldn't be just one person at the top of one company calling the shots for all humanity.

insight • 3 weeks ago • Via Gary Marcus on AI •

Employee Discontent. Employees say promises were made and not kept; they have lost faith in Altman personally and in the company's commitment to AI safety.

insight • 3 weeks ago • Via Gary Marcus on AI •

Image Generation Capabilities. Grok has also integrated FLUX.1 by Black Forest Labs to enable users to generate images.

data point • 4 weeks ago • Via Last Week in AI • www.theverge.com

Premium Access. Access to Grok is currently limited to Premium and Premium+ users.

insight • 4 weeks ago • Via Last Week in AI • techcrunch.com

Grok-2 Release. Elon Musk's company, X, has launched Grok-2 and Grok-2 mini in beta, both of which are AI models capable of generating images on the X social network.

data point • 4 weeks ago • Via Last Week in AI • techcrunch.com

Deepfake Scams. Elderly retiree loses over $690,000 to digital scammers using AI-powered deepfake videos of Elon Musk to promote fraudulent investment opportunities.

insight • 4 weeks ago • Via Last Week in AI • www.nytimes.com

AI Codec Proposal. Using canonical codec representations like JPEG, this article proposes a method to directly model images and videos as compressed files, showing its effectiveness in image generation.

recommendation • 4 weeks ago • Via Last Week in AI • arxiv.org

Procreate Stance. Procreate vows to never incorporate generative AI into its products, taking a stand against the technology.

data point • 4 weeks ago • Via Last Week in AI • techcrunch.com

US AI Lead. US leads in AI investment and job postings, surpassing China and other countries.

insight • 4 weeks ago • Via Last Week in AI • www.foxnews.com

AI Image Licensing. OpenAI CEO's warning about the use of copyrighted content in AI models is highlighted as Anthropic faces a lawsuit for training its Claude AI model using authors' work without consent.

insight • 4 weeks ago • Via Last Week in AI • www.windowscentral.com

AI Risks Repository. MIT researchers release a comprehensive AI risk repository to guide policymakers and stakeholders in understanding and addressing the diverse and fragmented landscape of AI risks.

data point • 4 weeks ago • Via Last Week in AI • techcrunch.com

Research Automation Phases. The AI Scientist operates in three phases: idea generation, experimental iteration, and paper write-up.

insight • 4 weeks ago • Via Last Week in AI • www.marktechpost.com

AI Scientist Development. "The AI Scientist" is a novel AI system designed to automate the entire scientific research process.

data point • 4 weeks ago • Via Last Week in AI • www.marktechpost.com

AI Artist Claim Approved. The judge allowed a copyright claim against DeviantArt, which used a model based on Stable Diffusion.

insight • 4 weeks ago • Via Last Week in AI • www.theverge.com

Lawsuit Progress. The lawsuit against AI companies Stability and Midjourney, filed by a group of artists alleging copyright infringement, has gained traction as Judge William Orrick approved additional claims.

insight • 4 weeks ago • Via Last Week in AI • www.theverge.com

Conversational Features. Gemini Live can also interpret video in real time and function in the background or when the phone is locked.

recommendation • 4 weeks ago • Via Last Week in AI • www.theverge.com

Gemini Live Introduction. Google has introduced a new voice chat mode for its AI assistant, Gemini, named Gemini Live.

data point • 4 weeks ago • Via Last Week in AI • www.theverge.com

Image Tolerance. Compared to other image generators on the market, the model is far more permissive with regards to what images it can generate.

insight • 4 weeks ago • Via Last Week in AI • www.theverge.com

AI-driven Features. The company plans to deploy Grok-2 and Grok-2 mini in AI-driven features on X, including improved search capabilities, post analytics, and reply functions.

recommendation • 4 weeks ago • Via Last Week in AI • techcrunch.com

Shoutout.io Page. Shoutout.io is a very helpful tool that allows independent creators to gather testimonials in one place.

data point • 4 weeks ago • Via Artificial Intelligence Made Simple • redirect.medium.systems

Research Engineer Openings. Haize Labs is looking for research scientists to join their teams based in NYC.

data point • 4 weeks ago • Via Artificial Intelligence Made Simple • job-boards.greenhouse.io

Encouragement to Apply. We encourage you to apply even if you do not believe you meet every single qualification: we're open to considering a wide range of perspectives and experiences.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

Guest Posts Initiative. I want to integrate more guest posts in this newsletter to cover a greater variety of topics and hear from experts across the board.

recommendation • 4 weeks ago • Via Artificial Intelligence Made Simple •

Case Study Articles. I’d like to do more case-study-style articles, where we look into different organizations to study how they solved their business/operational challenges with AI.

recommendation • 4 weeks ago • Via Artificial Intelligence Made Simple •

Prompt Caching Launch. Prompt Caching is Now Available on the Anthropic API for Specific Claude Models.

data point • 4 weeks ago • Via Last Week in AI • www.marktechpost.com

Deepfake Scams. How ‘Deepfake Elon Musk’ Became the Internet's Biggest Scammer.

data point • 4 weeks ago • Via Last Week in AI • www.nytimes.com

FCC AI Robocall Rules. FCC Proposes New Rules on AI-Powered Robocalls.

data point • 4 weeks ago • Via Last Week in AI • www.pymnts.com

MIT AI Risks Repository. MIT researchers release a repository of AI risks.

data point • 4 weeks ago • Via Last Week in AI • techcrunch.com

Popular AI Search Startup. Perplexity's popularity surges as AI search start-up takes on Google.

data point • 4 weeks ago • Via Last Week in AI • www.ft.com

AI Search Evolution. Google's AI-generated search summaries change how they show their sources.

data point • 4 weeks ago • Via Last Week in AI • www.theverge.com

Risks of Unaligned AI. Overview of potential risks of unaligned AI models and skepticism around SingularityNet's AGI supercomputer claims.

insight • 4 weeks ago • Via Last Week in AI •

Huawei's AI Chip. Huawei's Ascend 910C AI chip aims to rival NVIDIA's H100 amidst US export controls.

data point • 4 weeks ago • Via Last Week in AI •

Google Voice Chat Feature. Google introduces Gemini Voice Chat Mode available to subscribers and integrates it into Pixel Buds Pro 2.

data point • 4 weeks ago • Via Last Week in AI •

Grok 2 Beta Release. Grok 2's beta release features new image generation using Black Forest Labs' tech.

data point • 4 weeks ago • Via Last Week in AI •

Legal Standards. The new form of SB 1047 can basically only be used after something really bad happens, as a tool to hold companies liable, rather than prevent risks.

insight • 4 weeks ago • Via Gary Marcus on AI •

Regulatory Fight. Most or all of the major big tech companies joined a lobbying organization that fought SB-1047, despite broad public support for the bill.

data point • 4 weeks ago • Via Gary Marcus on AI •

Bill Weakened. California's SB-1047 was significantly weakened in last-minute negotiations, affecting its ability to address catastrophic risks.

insight • 4 weeks ago • Via Gary Marcus on AI •

Need for Federal Legislation. Future state and federal efforts may suffer if the bill doesn't pass, showing that comprehensive regulatory efforts are needed at all levels.

recommendation • 4 weeks ago • Via Gary Marcus on AI •

Comprehensive Approach Needed. We need a comprehensive approach to AI regulation, as SB 1047 is just a start in addressing various risks associated with AI.

recommendation • 4 weeks ago • Via Gary Marcus on AI •

Innovative Balance. Passing SB-1047 may normalize the regulation of AI while allowing for continued innovation, showing that safety precautions are compatible with industry growth.

insight • 4 weeks ago • Via Gary Marcus on AI •

Whistleblower Protections. The bill provides important whistleblower protections, which are critical for transparency and accountability in AI companies.

insight • 4 weeks ago • Via Gary Marcus on AI •

Deterrent Value. SB-1047's strongest utility may come as a deterrent, clarifying that the duty to take reasonable care applies to AI developers.

insight • 4 weeks ago • Via Gary Marcus on AI •

Weak Assurance. The 'reasonable care' standard may be too weak, as billion-dollar companies might exploit it without facing meaningful consequences.

insight • 4 weeks ago • Via Gary Marcus on AI •

Narrow Focus. SB 1047 seems heavily skewed toward addressing hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation and discrimination.

insight • 4 weeks ago • Via Gary Marcus on AI •

High Subscription Importance. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

Experts in Chocolate Milk. Our chocolate milk cult has a lot of experts and prominent figures doing cool things.

data point • 4 weeks ago • Via Artificial Intelligence Made Simple •

Access to Justice Correlation. We can find a strong correlation between the fairness and independence of the court system and the general life quality and well-being of its populace.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

AI Speed vs Court Speed. High tech runs three times faster than normal businesses, and the government runs three times slower than normal businesses.

data point • 4 weeks ago • Via Artificial Intelligence Made Simple •

Judicial System's Importance. The court system undertakes a vitally important function in society as a central governance mechanism.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

AI Adoption by Courts. The Attorney General's Office of São Paulo adopted GPT-4 last year to speed up the screening and reviewing process of lawsuits.

data point • 4 weeks ago • Via Artificial Intelligence Made Simple • news.microsoft.com

Cautious AI Implementation. Hallucination risks and security and data confidentiality concerns call for tremendous caution and common sense when using and implementing AI tools.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

Impact on Legal Services. Legal copilots will inevitably drive down the price of legal services and make legal knowledge more accessible to non-lawyers.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

Legal AI Tools' Future. The legal copilots that will succeed should be developed and branded with a focus on time-savings and productivity benefits.

recommendation • 4 weeks ago • Via Artificial Intelligence Made Simple •

Changing Nature of Legal Work. AI-driven tools will take care of routine, monotone tasks so lawyers can focus more on the strategic, high-value work.

insight • 4 weeks ago • Via Artificial Intelligence Made Simple •

AI Use in Legal Sector. 73% of 700 lawyers planned to utilize generative AI in their legal work within the next year.

data point • 4 weeks ago • Via Artificial Intelligence Made Simple •

Keynote Video. Here's the video (well produced by Machine Learning Street Talk (MLST)) of a talk I gave on Friday as a keynote at AGI-Summit 24.

data point • 1 month ago • Via Gary Marcus on AI • agi-conf.org

GPT-5 Not Released. And no, GPT-5 did not drop this week as many had hoped.

insight • 1 month ago • Via Gary Marcus on AI •

Differing Views. Interesting to see where his take and mine differ.

insight • 1 month ago • Via Gary Marcus on AI • x.com

Thoughts on Regulation. My thoughts on regulation are of course coming soon, in my next book (Taming Silicon Valley, now available for pre-order).

recommendation • 1 month ago • Via Gary Marcus on AI •

AI Winter Speculation. As for whether there is an AI winter coming, time will tell.

insight • 1 month ago • Via Gary Marcus on AI •

Expectations Reframing. At the very least, I foresee a significant reframing of expectations.

insight • 1 month ago • Via Gary Marcus on AI •

Audio Version Available. There is also an audio-only version, here.

data point • 1 month ago • Via Gary Marcus on AI • podcasters.spotify.com

New Humanoid Robot. Figure's new humanoid robot leverages OpenAI for natural speech conversations.

recommendation • 1 month ago • Via Last Week in AI • techcrunch.com

UK Merger Probe. Amazon faces UK merger probe over $4B Anthropic AI investment.

recommendation • 1 month ago • Via Last Week in AI • cointelegraph.com

Google Antitrust Ruling. Google Monopolized Search Through Illegal Deals, Judge Rules.

recommendation • 1 month ago • Via Last Week in AI • www.bloomberg.com

California AI Bill Impact. 'The Godmother of AI' says California's well-intended AI bill will harm the U.S. ecosystem.

recommendation • 1 month ago • Via Last Week in AI • fortune.com

OpenAI Co-founder Exit. OpenAI co-founder Schulman leaves for Anthropic, Brockman takes extended leave.

recommendation • 1 month ago • Via Last Week in AI • techcrunch.com

Adept AI Returns. Investors in Adept AI will be paid back after Amazon hires startup's top talent.

recommendation • 1 month ago • Via Last Week in AI • www.semafor.com

Character.AI Founders. Google's hiring of Character.AI's founders is the latest sign that part of the AI startup world is starting to implode.

recommendation • 1 month ago • Via Last Week in AI • fortune.com

Compute Efficiency Research. Research advancements such as Google's compute-efficient inference models and self-compressing neural networks, showcasing significant reductions in compute requirements while maintaining performance.

data point • 1 month ago • Via Last Week in AI •

Humanoid Robotics Advances. Rapid advancements in humanoid robotics exemplified by new models from companies like Figure in partnership with OpenAI, achieving amateur-level human performance in tasks like table tennis.

data point • 1 month ago • Via Last Week in AI •

OpenAI Changes. OpenAI's dramatic changes with co-founder exits, extended leaves, and new lawsuits from Elon Musk.

data point • 1 month ago • Via Last Week in AI •

Personnel Movements. Notable personnel movements and product updates, such as Character.ai leaders joining Google and new AI features in Reddit and Audible.

data point • 1 month ago • Via Last Week in AI •

Facial Recognition Use Case. In the U.K., the London Metropolitan Police admitted to using facial recognition technology on tens of thousands of people attending King Charles III's coronation in May 2023.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Mass Surveillance Impact. A recent study in The Quarterly Journal of Economics suggests that fewer people protest when public safety agencies acquire AI surveillance software to complement their cameras.

data point • 1 month ago • Via Artificial Intelligence Made Simple • academic.oup.com

Multi-modal AI Concerns. Despite the potential of multi-modal AI, there is worry regarding its use in mass surveillance and automated weapon systems.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Emerging Adversarial Techniques. Transferability of adversarial examples between models and query-based attacks are vital strategies for black-box settings.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Evolutionary Strategies Potential. Evolutionary algorithms, such as genetic algorithms and differential evolution, show promise for generating adversarial perturbations.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Norm Considerations in Perturbation. Different norms (L1, L2, and L-infinity) significantly impact the outcome and effectiveness of adversarial perturbations.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
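
The choice of norm bounds different perturbation shapes: L-infinity caps every pixel's change a little, L1 favors sparse but larger changes, and L2 bounds total energy. A minimal sketch of the three measurements on a flattened perturbation vector:

```python
# Measuring a perturbation `delta` (flattened image difference) under the
# three norms commonly used to constrain adversarial perturbations.
def l1(delta):
    return sum(abs(d) for d in delta)

def l2(delta):
    return sum(d * d for d in delta) ** 0.5

def linf(delta):
    return max(abs(d) for d in delta)

delta = [0.01, -0.02, 0.005, 0.0]
budgets = {"L1": l1(delta), "L2": l2(delta), "Linf": linf(delta)}
```

An attack is then typically posed as maximizing the model's loss subject to one of these budgets staying below a threshold epsilon.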

Robust Features Importance. Training on just Robust Features leads to good results, suggesting a generalized extraction of robust features is a valuable future avenue for exploration.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Infectious Jailbreak Feasibility. Feeding an adversarial image into the memory of any randomly chosen agent can achieve infectious jailbreak, causing all agents to exhibit harmful behaviors exponentially fast.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Adversarial Perturbations Explained. Adversarial perturbations (AP) are subtle changes to images that can deceive AI classifiers by causing misclassification.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Agent Smith Attack. The Agent Smith setup involves simulating a multi-agent environment where a single adversarial image can lead to widespread harmful behaviors across almost all agents.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Artists' Lawsuit Progress. A class action lawsuit against AI companies Stability, Runway, and DeviantArt, filed by artists alleging copyright infringement, has been partially approved to proceed by a judge.

insight • 1 month ago • Via Last Week in AI •

Falcon Mamba 7B Launch. The Technology Innovation Institute (TII) has introduced Falcon Mamba 7B, a new large language model that uses a State Space Language Model (SSLM) architecture, marking a shift from traditional transformer-based designs.

data point • 1 month ago • Via Last Week in AI • www.maginative.com

Performance Verification. Falcon Mamba 7B has been independently verified by Hugging Face as the top-performing open-source SSLM globally, outperforming established transformer-based models in benchmark tests.

data point • 1 month ago • Via Last Week in AI •

Figure 02 Introduction. Figure has introduced its latest humanoid robot, Figure 02, which is designed to work alongside humans in a factory setting.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

New Supercomputing Initiatives. A new supercomputing network aims to accelerate the development of artificial general intelligence (AGI) through a worldwide network of powerful computers.

data point • 1 month ago • Via Last Week in AI • www.livescience.com

AI Emotional Attachment Concerns. OpenAI is concerned about users developing emotional attachments to the GPT-4o chatbot, warning of potential negative impacts on human interactions.

insight • 1 month ago • Via Last Week in AI • www.techradar.com

AI Assistant at JPMorgan. JPMorgan Chase has rolled out a generative AI assistant to tens of thousands of its employees, designed to be as ubiquitous as Zoom.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

WeRide IPO Plans. WeRide, a Chinese autonomous vehicle company, is seeking a $5.02 billion valuation in its U.S. IPO, aiming to raise about $96 million from the offering.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

AI-Driven 3D Generation. A research paper by scientists from Meta and Oxford University introduces VFusion3D, an AI-driven technique capable of generating high-quality 3D models from 2D images in seconds.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Instagram AI Features. Instagram's new AI features allow people to create AI versions of themselves.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Misinformation Impact. The impact of misinformation via deepfakes, particularly one involving Elon Musk, is also highlighted.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

Open-Source AI Stance. The White House says there is no need to restrict 'open-source' artificial intelligence — at least for now.

insight • 1 month ago • Via Last Week in AI • www.wdtn.com

AI Law in Europe. The world's first-ever AI law is now enforced in Europe, targeting US tech giants.

data point • 1 month ago • Via Last Week in AI • www.vcpost.com

New AI Tools. Black Forest Labs releases Open-Source FLUX.1, a 12 Billion Parameter Rectified Flow Transformer capable of generating images from text descriptions.

data point • 1 month ago • Via Last Week in AI • www.marktechpost.com

NVIDIA Chip Issues. Nvidia reportedly delays its next AI chip due to a design flaw.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Waymo Rollout. Waymo's driverless cars have rolled out in San Francisco.

data point • 1 month ago • Via Last Week in AI • www.sfchronicle.com

AI News Summary. Hosts Andrey Kurenkov and John Krohn dive into significant updates and discussions in the AI world.

insight • 1 month ago • Via Last Week in AI •

Clarifications Requested. Concerns about inaccuracies in the essay lead to a request for reconsideration of the stance on SB-1047.

insight • 1 month ago • Via Gary Marcus on AI •

Kill Switch Misunderstanding. The 'kill switch' requirement doesn't apply to open-source models once they are out of the original developer's control.

insight • 1 month ago • Via Gary Marcus on AI •

Concerns on SB-1047. SB-1047 does not require predicting every use of an AI model, but focuses on specific, serious 'critical harms' such as mass casualties and large-scale cyberattacks.

insight • 1 month ago • Via Gary Marcus on AI •

Impact on Little Tech. Much of the bill's requirements are limited to models with training runs of $100 million+, so it does not predominantly impact 'little tech'.

insight • 1 month ago • Via Gary Marcus on AI •

Common Regulatory Standards. Asking for standards and a degree of care in AI is common across many industries, contrasting with the fewer regulations on AI systems that could pose catastrophic risks.

insight • 1 month ago • Via Gary Marcus on AI •

Need for Concrete Suggestions. While favoring AI governance, there are no positive, concrete suggestions offered for addressing risks such as mass casualties or large-scale cyberattacks.

insight • 1 month ago • Via Gary Marcus on AI •

Cost Considerations. While modern RAG setups (especially generator-heavy ones) are more expensive than V0, the general principle is still useful to keep in mind.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

RAG Definition. Retrieval Augmented Generation involves using AI to search a pre-defined knowledge base to answer user queries.

data point • 1 month ago • Via Artificial Intelligence Made Simple •
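
The loop in miniature (illustrative only — similarity here is naive word overlap, where real systems use embedding search):

```python
# Toy RAG loop: retrieve the most relevant chunks, then answer from them.
def retrieve(query, corpus, k=2):
    def overlap(doc):
        q, d = set(query.lower().split()), set(doc.lower().split())
        return len(q & d)
    return sorted(corpus, key=overlap, reverse=True)[:k]

def rag_answer(query, corpus, llm):
    context = "\n".join(retrieve(query, corpus))
    return llm(f"Answer using only this context:\n{context}\n\nQ: {query}")
```

Grounding the generation step in retrieved context is what distinguishes RAG from asking the model to answer from its parameters alone.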

RAG Advantages. RAG speeds this up by having the AI find relevant contexts and aggregate them.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

RAG System Recipes. The authors propose two distinct recipes for implementing RAG systems.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Integration Benefits. Query Classification Module leads to an average improvement in overall score from 0.428 to 0.443 and a reduction in latency time from 16.41 to 11.58 seconds per query.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

RAG vs Fine-Tuning. RAG outperforms fine-tuning with respect to injecting new sources of information into an LLM's responses.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Fine-Tuning Focus. It’s best to keep the learning/information mainly to the data indexing.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Retrieval Methods Findings. The authors recommend monoT5 as a comprehensive method balancing performance and efficiency.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Hybrid Retrieval Success. Hybrid search, combining sparse and dense retrieval with HyDE, achieves the best retrieval performance.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
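
A minimal sketch of the hybrid idea, assuming per-document sparse (BM25-style) and dense (embedding-similarity) scores are already normalized — the blending weight is illustrative, and the HyDE step (embedding a hypothetical answer instead of the raw query) is omitted:

```python
# Hybrid retrieval: rank documents by a convex combination of a sparse
# keyword score and a dense embedding-similarity score.
def hybrid_rank(docs, sparse_scores, dense_scores, alpha=0.3):
    blended = {d: alpha * sparse_scores[d] + (1 - alpha) * dense_scores[d]
               for d in docs}
    return sorted(docs, key=blended.get, reverse=True)

docs = ["a", "b"]
order = hybrid_rank(docs, {"a": 0.9, "b": 0.1}, {"a": 0.2, "b": 0.8}, alpha=0.3)
```

Sparse scores catch exact keyword matches that embeddings miss; dense scores catch paraphrases that keywords miss, which is why the combination tends to win.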

Chunking Strategy. Sentence-level chunking with a size of 512 tokens, using techniques like 'small-to-big' and 'sliding window', provides a good balance between information preservation and processing efficiency.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •
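
The sliding-window part can be sketched in a few lines — sizes here are small for readability, not the 512-token setting above:

```python
# Sliding-window chunking: consecutive chunks share `overlap` tokens so
# information near a chunk boundary appears whole in at least one chunk.
def chunk_sliding(tokens, size=512, overlap=64):
    step = size - overlap
    return [tokens[i:i + size]
            for i in range(0, max(len(tokens) - overlap, 1), step)]

chunks = chunk_sliding(list(range(10)), size=4, overlap=1)
```

'Small-to-big' adds a second layer on top of this: retrieve against small chunks for precision, then hand the generator the larger parent chunk for context.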

BERT Accuracy. A BERT-based classifier achieved high accuracy (over 95%) in determining retrieval needs.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Query Classification. A query classification step decides whether retrieval is needed for a given query, helping keep costs down.

data point • 1 month ago • Via Artificial Intelligence Made Simple •
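
A routing sketch of this idea — the keyword heuristic here is a stand-in for the trained BERT classifier, and the cue list is invented:

```python
# Route a query: retrieve only when it looks like it needs external facts.
FACTUAL_CUES = ("who", "when", "where", "according to", "cite")

def needs_retrieval(query: str) -> bool:
    q = query.lower()
    return any(cue in q for cue in FACTUAL_CUES)

def answer(query, llm, retriever):
    context = retriever(query) if needs_retrieval(query) else ""
    return llm(query, context)
```

Queries the classifier routes away from retrieval (rewriting, summarizing given text) skip the search step entirely, which is where the latency and cost savings come from.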

RAG vs. LLMs. When resourced sufficiently, long-context LLMs consistently outperform Retrieval Augmented Generation in terms of average performance.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Emergent Garden. Emergent Garden puts out very interesting videos on Life simulations, neural networks, cellular automata, and other emergent programs.

insight • 1 month ago • Via Artificial Intelligence Made Simple • www.youtube.com

Reading Recommendations. Devansh plans to share AI Papers/Publications, interesting books, videos, etc., each week.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Community Engagement. Devansh encourages individuals doing interesting work to drop their introduction in the comments for potential spotlight features.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Airbnb Architecture Shift. In 2018, Airbnb began its migration to a service-oriented architecture due to challenges with maintaining their Ruby on Rails 'monorail'.

insight • 1 month ago • Via Artificial Intelligence Made Simple • www.infoq.com

Vocab Size Research. Research indicates that larger models deserve larger vocabularies, and increasing vocabulary size consistently improves downstream performance.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Supporting Independent Work. Devansh puts a lot of effort into creating work that is informative, useful, and independent from undue influence.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Content Focus. The focus will be on AI and Tech, but ideas might range across business, philosophy, ethics, and much more.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Confabulation Perspective. Hallucinations in large language models can be considered a potential resource instead of a categorically negative pitfall.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

GitHub CI/CD Insights. GitHub runs 15,000 CI jobs within an hour across 150,000 cores of compute.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Machine Learning Applications. Software engineers building applications using machine learning need to test models in real-world scenarios before choosing the best performing model.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

LLM Paper Notes. Jean David Ruvini posts his notes on LLM/NLP related papers every month, providing valuable insights.

insight • 1 month ago • Via Artificial Intelligence Made Simple • www.linkedin.com

Autonomous Driving Milestone. Stanford Engineering and Toyota Research Institute achieve a milestone in autonomous driving by creating the world’s first autonomous Tandem Drift team, using AI to direct two driverless cars to perform synchronized maneuvers.

data point • 1 month ago • Via Last Week in AI • engineering.stanford.edu

Concerns Over AI Alteration. Elon Musk shares deepfake video of Kamala Harris, potentially violating platform's policies against synthetic and manipulated media, sparking concerns about AI-altered content in the upcoming election.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

AI Law in Europe. Europe enforces the world's first AI law, targeting US tech giants with regulations on AI development, deployment, and use.

data point • 1 month ago • Via Last Week in AI • www.vcpost.com

Meta's AI Studio Launch. Meta has launched a new tool called AI Studio, allowing users in the US to create AI versions of themselves on Instagram or the web.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Perplexity AI's Revenue Share. Perplexity AI plans to share advertising revenue with news publishers whose content is used by the bot, responding to accusations of plagiarism and unethical web scraping.

insight • 1 month ago • Via Last Week in AI •

Funding for Black Forest Labs. Black Forest Labs, a startup founded by the creators of Stable Diffusion, has launched FLUX.1, a new text-to-image model suite for the open-source artificial intelligence community and secured $31 million in seed funding.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

Musk's Revived Lawsuit. Elon Musk has reinitiated a lawsuit against OpenAI, the creator of the AI chatbot ChatGPT, reigniting a longstanding dispute that originated from a power conflict within the San Francisco-based startup.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

Focus on AI Alignment. Schulman, who played a key role in creating the AI-powered chatbot platform ChatGPT and led OpenAI's alignment science efforts, stated his move was driven by a desire to focus more on AI alignment and hands-on technical work.

insight • 1 month ago • Via Last Week in AI •

OpenAI Departures. OpenAI co-founder John Schulman has left the company to join rival AI startup Anthropic, while OpenAI president and co-founder Greg Brockman is taking an extended leave until the end of the year.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Data Collection Scale. ChatGPT has gathered unprecedented amounts of personal data.

data point • 1 month ago • Via Gary Marcus on AI •

Personal Data Training. Sam Altman has acknowledged wanting to train on everyone's personal documents (Word files, email etc).

data point • 1 month ago • Via Gary Marcus on AI •

WorldCoin Connection. Sam founded WorldCoin, known for their eye-scanning orb.

data point • 1 month ago • Via Gary Marcus on AI •

Monetization Intent. Altman wants to know - and monetize - everything about you.

insight • 1 month ago • Via Gary Marcus on AI •

Investment in Hardware. OpenAI just put money into a $60M fundraise for a webcam company and is planning a hardware joint venture with it.

data point • 1 month ago • Via Gary Marcus on AI • www.theinformation.com

Security Expertise. OpenAI recently put Paul Nakasone (ex NSA) on the board.

data point • 1 month ago • Via Gary Marcus on AI •

Future Prospects Doubted. Prospects don’t seem as strong as they once did.

insight • 1 month ago • Via Gary Marcus on AI •

Risk of WeWork Comparison. I said it before, and I will say it again: OpenAI could wind up being seen as the WeWork of AI.

insight • 1 month ago • Via Gary Marcus on AI •

Morale Issues Identified. The board, which basically said it couldn't trust Sam, may have had a point.

insight • 1 month ago • Via Gary Marcus on AI •

Key Staff Departures. Over the last several months they have lost Ilya Sutskever, a whole bunch of safety people, and (slightly earlier) Andrej Karpathy.

data point • 1 month ago • Via Gary Marcus on AI •

Continuous Monitoring. Gary Marcus has had his eye on OpenAI for a long time.

recommendation • 1 month ago • Via Gary Marcus on AI •

Valuation Concerns. Will they earn enough to justify their $80B valuation?

insight • 1 month ago • Via Gary Marcus on AI •

Google Antitrust Case. Google lost its antitrust case; it could have implications for Google's storehouse of AI training data.

insight • 1 month ago • Via Gary Marcus on AI • x.com

Nvidia Stock Decline. Nvidia dropped 6%, and is down 20% over the last month.

data point • 1 month ago • Via Gary Marcus on AI • garymarcus.substack.com

AGI Predictions. OpenAI tempered expectations for its next event, and said we wouldn't see GPT-5 then.

insight • 1 month ago • Via Gary Marcus on AI • garymarcus.substack.com

Elon Musk's Lawsuit. Elon sued OpenAI again; the most interesting thing is that the suit could force a discussion of what AGI means – in court.

insight • 1 month ago • Via Gary Marcus on AI • www.nytimes.com

Market Uncertainty. It is also not out of the question that today could someday be seen as a turning point.

insight • 1 month ago • Via Gary Marcus on AI •

Election Misinformation. Five states suggested that Musk's AI chatbot has spread election misinformation.

insight • 1 month ago • Via Gary Marcus on AI • www.axios.com

AI in Mathematics. AI achieves silver-medal standard solving International Mathematical Olympiad problems.

data point • 1 month ago • Via Last Week in AI • deepmind.google

Strike Over AI. Video game performers will go on strike over artificial intelligence concerns.

data point • 1 month ago • Via Last Week in AI • apnews.com

Legislative Actions. Democratic senators seek to reverse Supreme Court ruling that restricts federal agency power.

insight • 1 month ago • Via Last Week in AI • www.nbcnews.com

Impact of AI on Jobs. As new tech threatens jobs, Silicon Valley promotes no-strings cash aid.

insight • 1 month ago • Via Last Week in AI • www.npr.org

AI Safety Concerns. Senators demand OpenAI detail efforts to make its AI safe.

insight • 1 month ago • Via Last Week in AI • www.washingtonpost.com

Cohere's Funding. AI startup Cohere raises US$500-million, valuing company at US$5.5-billion.

data point • 1 month ago • Via Last Week in AI • www.theglobeandmail.com

Meta's New AI Model. Meta releases open-source AI model it says rivals OpenAI, Google tech.

data point • 1 month ago • Via Last Week in AI • www.washingtonpost.com

Google's Gemini Model. Google gives free Gemini users access to its faster, lighter 1.5 Flash AI model.

data point • 1 month ago • Via Last Week in AI • www.engadget.com

OpenAI's SearchGPT. OpenAI announces SearchGPT, its AI-powered search engine.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Investor Enthusiasm Diminishing. Investors may well stop forking out money at the rates they have, enthusiasm may diminish, and a lot of people may lose their shirts.

insight • 1 month ago • Via Gary Marcus on AI •

Generative AI Limitations. There is just one thing: Generative AI, at least as we know it now, doesn't actually work that well, and maybe never will.

insight • 1 month ago • Via Gary Marcus on AI •

AI Bubble Prediction. I just wrote a hard-hitting essay for WIRED predicting that the AI bubble will collapse in 2025 — and now I wish I hadn't.

insight • 1 month ago • Via Gary Marcus on AI •

Imminent Collapse. The collapse of the generative AI bubble – in a financial sense – appears imminent, likely before the end of the calendar year.

insight • 1 month ago • Via Gary Marcus on AI •

Strict Disbelief. I've always thought GenAI was overrated.

insight • 1 month ago • Via Gary Marcus on AI •

Consistent Predictions. In March of this year, I made a series of seven predictions about how this year would go. Every one of them has held firm, for every model produced by every developer ever since.

data point • 1 month ago • Via Gary Marcus on AI •

Warning About AI. Almost exactly a year ago, in August 2023, I was (AFAIK) the first person to warn that Generative AI could be a dud.

data point • 1 month ago • Via Gary Marcus on AI • garymarcus.substack.com

Historical Predictions. In December 2022, at the height of ChatGPT's popularity I made a series of seven predictions about GPT-4 and its limits, such as hallucinations and making stupid errors, in an essay called What to Expect When You Are Expecting GPT-4.

data point • 1 month ago • Via Gary Marcus on AI • open.substack.com

Median Split Insight. The key dividing line on the SAT math lies between those who understand fractions, and those who do not.

insight • 1 month ago • Via Gary Marcus on AI •

AGI Misconceptions. Realizing neural networks struggle with outliers makes AGI seem like sheer fantasy, as no general solution to the outlier problem exists yet.

insight • 1 month ago • Via Gary Marcus on AI •

Symbolic vs Neural Networks. Symbolic systems have always been good for outliers; neural networks have always struggled with them.

insight • 1 month ago • Via Gary Marcus on AI •

Generative AI Expectations. GenAI sucks at outliers; if things are far enough from the space of trained examples, the techniques will fail.

insight • 1 month ago • Via Gary Marcus on AI •

AI Industry Bubble. An entire industry has been built - and will collapse - because people aren’t getting it regarding the outlier problem.

insight • 1 month ago • Via Gary Marcus on AI •

Cognitive Sciences Respect. AI researchers should have more respect for the cognitive sciences to make better advancements.

recommendation • 1 month ago • Via Gary Marcus on AI •

Historical Context. Machine learning had trouble with outliers in the 1990s, and it still does.

data point • 1 month ago • Via Gary Marcus on AI •

Outlier Problem Noted. Handling outliers is still the Achilles’ Heel of neural networks; this has been a constant issue for over a quarter century.

data point • 1 month ago • Via Gary Marcus on AI •

Machine Learning Limitations. Current approaches to machine learning are lousy at outliers, which means they often say and do things that are absurd when encountering unusual circumstances.

insight • 1 month ago • Via Gary Marcus on AI •

Internalized Taskmaster. The internalized taskmaster becomes more insidious than any external authority, driving individuals to constantly strive for more.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Effects of Boredom. Han highlights that deep boredom can lead to mental relaxation, contrasting with the hectic pace of contemporary life.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Cultural Critique. While some critiques of Han's work resonate, there are also suggestions that engaging with craftsmanship can bring joy, countering the narrative of constant productivity.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Engagement with Philosophy. The article recommends exploring philosophical perspectives like those of Nietzsche and Kierkegaard alongside Han's analysis for a broader understanding of the issues at hand.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Limitations of Achievement. The achievement society leads to a distorted view of life, reducing relationships and experiences to mere metrics of success.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Burnout Society Overview. Byung-Chul Han describes how modern society primes us for burnout, reflecting on individual experiences in this context.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Self-Destructive Pressure. The achievement-subject experiences destructive self-reproach and auto-aggression, resulting in a mental war against themselves.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Importance of Idleness. Han emphasizes the need for idle work, where tasks are done without worrying about results, to regain the right to be 'Human Beings' instead of 'Human Doings'.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Impact of Positivity. In the achievement society, positivity becomes a dominant force, pushing individuals to be happier and more successful, leading to internalized pressure.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Achievement Society Dynamics. Society has transitioned from a Discipline-based model to an Achievement-based one, driven by internal pressures to succeed.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Investor Concerns. Microsoft's Chief Financial Officer painted a picture of a much slower burn, alarming some investors.

insight • 1 month ago • Via Gary Marcus on AI •

GenAI Project Canceled. Another GenAI monetization scheme bites the dust.

insight • 1 month ago • Via Gary Marcus on AI •

Survey Findings. The Upwork survey highlighted during the week reflects shifting sentiments around Generative AI.

data point • 1 month ago • Via Gary Marcus on AI • www.upwork.com

Generative AI Decline. Generative AI might be a dud; I just didn't expect it to fade so fast.

insight • 1 month ago • Via Gary Marcus on AI • garymarcus.substack.com

Warning on Deep Learning. Gary Marcus has been warning that deep learning was oversold since November 2012. Looks like he was right.

insight • 1 month ago • Via Gary Marcus on AI • www.newyorker.com

Opportunity for Resources. The fact that the GenAI bubble is apparently bursting sooner than expected may soon free up resources for other approaches, e.g., into neurosymbolic AI.

insight • 1 month ago • Via Gary Marcus on AI • garymarcus.substack.com

Loss of Faith. The bubble has begun to burst. Users have lost faith, clients have lost faith, VC's have lost faith.

insight • 1 month ago • Via Gary Marcus on AI •

Canceled Deal Reported. Business Insider reported a canceled deal, exacerbating concerns for the sector.

insight • 1 month ago • Via Gary Marcus on AI • stocks.apple.com

Combat Information Overload. The best way to combat the information overload created by Deepfakes is to empower people to stand on their own, interact with the world, and take care of themselves.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Deepfake Risks Discussion. The discussions around the risks from Deepfakes are incomplete (or wrong) since they exaggerate some risks while ignoring others.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Cognitive Overload. The most immediate and pervasive impact of deepfakes would be the cognitive overload and information fatigue they create.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Need for Educational Reform. The way we see education needs a rework: the emphasis on courses, books, and degrees creates learners who are too static and passive.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Education and Empowerment. The best regulation will, therefore, focus on equipping us with the skills needed to navigate this.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Age of Misinformation. We fail with deepfakes because we fail with social media, resorting to the same ineffective responses to both: censorship and an abdication of personal responsibility.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Combatting Environmental Concerns. Investing in more energy-efficient hardware and software for deepfake creation can significantly reduce energy consumption and emissions.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Environmental Impact. The energy-intensive process of generating deepfakes will contribute to climate change.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Scams and Vulnerability. Deepfakes provide a new tool for scammers, especially in targeting emotionally vulnerable people.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Legal Complications. Deepfakes challenge the reliability of digital evidence in court, potentially slowing legal processes.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Labeling AI Content. I believe that heavily AI-generated content should be labeled, and people featured in AI Ads must have given explicit approval for their appearance.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Exploitation of Public Figures. Non-consensual use of deepfakes can dilute personal brands and harm fan relationships.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Political Misinformation. The real danger lies in the lack of media literacy and critical thinking skills, exacerbated by political polarization.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

YouTube Search Deal. Google has become the exclusive search engine capable of surfacing results from Reddit, one of the internet's most significant sources of user-generated content.

data point • 1 month ago • Via Last Week in AI • www.404media.co

FTC AI Investigation. FTC investigates how companies use AI to implement surveillance pricing based on consumer behavior and personal data, seeking information from eight major companies.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

AI Scraping Backlash. AI companies are facing a growing backlash from website owners who are blocking their scraper bots, leading to concerns about the availability of data for AI training.

insight • 1 month ago • Via Last Week in AI • www.404media.co

Regulatory Pressure. Elon Musk's X platform is under pressure from data regulators after it emerged that users are consenting to their posts being used to build artificial intelligence systems via a default setting on the app.

insight • 1 month ago • Via Last Week in AI • amp-theguardian-com.cdn.ampproject.org

OpenAI Bankruptcy Risk. OpenAI faces potential bankruptcy with projected $5 billion losses due to high operational costs and insufficient revenue from its AI ventures.

insight • 1 month ago • Via Last Week in AI • www.windowscentral.com

AI Funding Surge. AI startups have raised $41.5 billion worldwide in the past five years, surpassing other industries and indicating a significant role for AI in the future development and modernization of various sectors.

data point • 1 month ago • Via Last Week in AI • www.trendingtopics.eu

Adobe Generative AI. Adobe introduces new generative AI features to Illustrator and Photoshop, including tools like Generative Shape Fill and Text to Pattern in Illustrator.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Mistral Large 2. Mistral AI has launched Mistral Large 2, a new generation of its flagship model, boasting 123 billion parameters and a 128k context window.

data point • 1 month ago • Via Last Week in AI • analyticsindiamag.com

SearchGPT Launch. OpenAI has announced its entry into the search market with SearchGPT, an AI-powered search engine that organizes and makes sense of search results rather than just providing a list of links.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Study Reference. Read Bjarnason's new essay here.

recommendation • 1 month ago • Via Gary Marcus on AI • www.baldurbjarnason.com

Organizational Expectations. Management's expectation that AI is a magic fix for the organizational catastrophe that is the mass layoff fad is often unfounded.

insight • 1 month ago • Via Gary Marcus on AI •

General Public Sentiment. Many coders and tech aficionados may love ChatGPT for work, but much of the outside world feels quite differently.

insight • 1 month ago • Via Gary Marcus on AI •

Unusual Study Results. It's quite unusual for a study like this on a new office tool to return such a resoundingly negative sentiment.

insight • 1 month ago • Via Gary Marcus on AI •

Negative AI Impact. Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way.

data point • 1 month ago • Via Gary Marcus on AI •

Productivity Concerns. Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect.

data point • 1 month ago • Via Gary Marcus on AI •

Confidence in AI. On balance, these systems simply cannot be counted on, which is a big part of why Fortune 500 companies have lost confidence in LLMs after the initial hype.

data point • 1 month ago • Via Gary Marcus on AI •

Neurosymbolic AI Potential. AlphaProof and AlphaGeometry are both along the lines of the first approach we discussed: using formal systems, like Cyc, to vet solutions produced by LLMs.

insight • 1 month ago • Via Gary Marcus on AI •

Generative AI Bubble. I fully expect that the generative AI bubble will begin to burst within the next 12 months, for many reasons.

insight • 1 month ago • Via Gary Marcus on AI •

Limitations of Generative AI. The biggest intrinsic failings of generative AI have to do with reliability, in a way that I believe can never be solved, given their inherent nature.

insight • 1 month ago • Via Gary Marcus on AI •

Frustration with LLMs. My strong intuition... is that LLMs are simply never going to work reliably, at least not in the general form that so many people last year seemed to be hoping.

insight • 1 month ago • Via Gary Marcus on AI •

Need for Hybrid Models. What I have advocated for, my entire career, is hybrid approaches, sometimes called neurosymbolic AI, because they combine the best of the currently popular neural network approach with the symbolic approach.

recommendation • 1 month ago • Via Gary Marcus on AI •

Progress by Google DeepMind. To do this GDM used not one but two separate systems, a new one called AlphaProof, focused on theorem proving, and an update (AlphaGeometry 2) to an older one focused on geometry.

data point • 1 month ago • Via Gary Marcus on AI • deepmind.google

Open Source Advancements. Mistral releases Codestral Mamba for faster, longer code generation.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

AI Video Model. Haiper 1.5 is a new AI video generation model challenging Sora and Runway.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

Policy Issues. The U.S. is considering 'draconian' sanctions against China's semiconductor industry.

insight • 1 month ago • Via Last Week in AI • www.tomshardware.com

Elon Musk's Supercomputer. Elon Musk is working on a giant xAI supercomputer in Memphis.

data point • 1 month ago • Via Last Week in AI • www.forbes.com

GPT-4o Mini Release. OpenAI's release of GPT-4o Mini is a small AI model powering ChatGPT.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Internal Controversies. Whistleblowers say OpenAI illegally barred staff from airing safety risks.

insight • 1 month ago • Via Last Week in AI • www.washingtonpost.com

AI Health Uncut. Sergei Polevikov publishes super insightful and informative reports on AI, Healthcare, and Medicine as a business.

insight • 1 month ago • Via Artificial Intelligence Made Simple • sergeiai.substack.com

Dabbawala Case Study. Mumbai’s dabbawala service presents an interesting case study of what is required to make food delivery profitable.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Importance of Stakeholder Alignment. The impact of getting stakeholder communication right vs wrong can be immense.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Philosophy of Love. Dostoevsky's ideas about love are hopeful, optimistic, demanding, and terrifying.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI's Investment Issues. Turns out a lot of the massive GPU purchase agreements and data-center acquisitions were misguided; investing without a clear long-term vision or an understanding of revenue has led to no ROI.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Semiconductor Industry Insights. The semiconductor capital equipment (semicap) industry is one of the most important industries on the planet and one that doesn’t get much love.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Nvidia's Value. In weeks leading up to Nvidia becoming the most valuable company in the world, I’ve received numerous requests for the updated math behind my analysis.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

LLM Evaluation Technique. We explore the use of state-of-the-art LLMs, such as GPT-4, as a surrogate for humans.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Human Evaluation Challenges. While human evaluation is the gold standard for assessing human preferences, it is exceptionally slow and costly.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Future Articles. Deepfake Part 3. Exploring the true dangers of AI-generated misinformation.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Active Subreddits. We started an AI Made Simple Subreddit. Come join us over here.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple • www.reddit.com

Community Engagement. If you’re doing interesting work and would like to be featured in the spotlight section, just drop your introduction in the comments/by reaching out to me.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Reading Recommendations. I figured I’d start sharing whatever AI Papers/Publications, interesting books, videos, etc I came across each week.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Newsletter Reach. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Research Focus. The goal is to share interesting content with y'all so that you can get a peek behind the scenes into my research process.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Profit Predictions. That’s not great news for OpenAI, and you can see why they haven’t been, um, Open, about their financials.

insight • 1 month ago • Via Gary Marcus on AI •

Potential Fatal Questions. All of these questions are hard, with no obvious answer; the last may be fatal.

insight • 1 month ago • Via Gary Marcus on AI •

OpenAI's Financial Issues. I have long suspected that OpenAI was losing money, and lots of it, but never seen an analysis, until this morning.

insight • 1 month ago • Via Gary Marcus on AI • www.theinformation.com

Accurate Predictions. Gary Marcus’s predictions over the last couple years have been astonishingly on target.

insight • 1 month ago • Via Gary Marcus on AI •

Investor Questions. But investors really ought to ask some tough questions, such as these: What is their moat?

recommendation • 1 month ago • Via Gary Marcus on AI •

Cash Raising Necessity. Obviously, their only hope is to raise more cash, and they will certainly try.

insight • 1 month ago • Via Gary Marcus on AI •

LLMs as Commodities. LLMs have just become exactly the commodity I predicted they would become, at the lowest possible price.

insight • 1 month ago • Via Gary Marcus on AI •

MetaAI Competition. Yesterday was something even more dramatic: MetaAI all but pulled the rug out from under OpenAI's business, offering a viable competitor to GPT-4 for free.

insight • 1 month ago • Via Gary Marcus on AI •

Lack of Competitive Moat. OpenAI, as far as I can tell, doesn’t really have any moat whatsoever, beyond brand recognition.

insight • 1 month ago • Via Gary Marcus on AI •

Hugging Face SmoLLM. Hugging Face has introduced SmoLLM, a new series of compact language models available in three sizes: 130M, 350M, and 1.7B parameters.

data point • 1 month ago • Via Last Week in AI • analyticsindiamag.com

Market Demand for Small Models. The trend toward small language models is accelerating as Arcee AI announced its $24M Series A funding only 6 months after a $5.5M seed round in January 2024.

insight • 1 month ago • Via Last Week in AI • venturebeat.com

OpenAI Reasoning Project. OpenAI is developing a new reasoning technology called Project Strawberry, which aims to enable AI models to conduct autonomous research and improve their ability to answer difficult user queries.

data point • 1 month ago • Via Last Week in AI • techreport.com

AI Security Standards. Top tech companies form a coalition to develop cybersecurity and safety standards for AI, aiming to ensure rigorous security practices and keep malicious hackers at bay.

recommendation • 1 month ago • Via Last Week in AI • www.axios.com

AI Training Data Ethics. A massive dataset containing subtitles from over 170,000 YouTube videos was used to train AI systems for major tech companies without permission, raising significant ethical and legal questions.

insight • 1 month ago • Via Last Week in AI • www.proofnews.org

Llama 3.1 Parameters. With 405 billion parameters, Llama 3.1 was developed using over 16,000 Nvidia H100 GPUs, costing Meta hundreds of millions of dollars.

data point • 1 month ago • Via Last Week in AI •

Meta Llama 3.1 Release. Meta has released Llama 3.1, the largest open-source AI model, claiming it outperforms top private models like GPT-4o and Claude 3.5 Sonnet.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

GPT-4o Mini Performance. GPT-4o mini scored 82% on the MMLU reasoning benchmark and 87% on the MGSM math reasoning benchmark, outperforming other models like Gemini 1.5 Flash and Claude 3 Haiku.

data point • 1 month ago • Via Last Week in AI •

GPT-4o Mini Launch. OpenAI has launched GPT-4o mini, a smaller, faster, and more cost-effective AI model than its predecessors.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Data Augmentation Strategy. We will use a policy like TrivialAugment + StyleTransfer for its superior performance and cost-benefit profile.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Effective Feature Extraction. Feature extraction is the highest ROI decision you can make.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Record Accuracy Achieved. On the widely used Labeled Faces in the Wild (LFW) dataset, our system achieves a new record accuracy of 99.63%.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Deepfake Detection System. We hope to build a Deepfake Detection system that can classify among three types of inputs: real, deepfake, and AI-generated.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Self-Supervised Learning Application. Self-supervised clustering is elite for selecting the right samples to train on, helping to overcome scaling limits.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Audience Engagement Strategy. Every share puts me in front of a new audience, and I rely entirely on word-of-mouth endorsements to grow.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Model Performance Improvement. Our method uses a deep convolutional network trained to directly optimize the embedding itself, achieving state-of-the-art face recognition performance using only 128 bytes per face.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Sample Selection for Retraining. It’s best to add train samples based on maximizing information gain instead of simply adding more random ones.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Temporal Feature Analysis. If you want to take things up a notch, you’re best served going for temporal feature extraction.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Importance of Ensemble Modeling. Using simple models keeps inference costs low and allows an ensemble to compensate for the weakness of one model by sampling a more diverse search space.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Vibe-Eval Suite. Reka AI introduces Vibe-Eval, a new evaluation suite designed to measure the progress of multimodal language models.

data point • 1 month ago • Via Last Week in AI • www.reka.ai

Global AI Regulation. Japan's Prime Minister Fumio Kishida unveils an international framework for the regulation and use of generative AI, emphasizing the need to address the potential risks and promote cooperation for safe and trustworthy AI.

data point • 1 month ago • Via Last Week in AI • apnews.com

AI in Healthcare. AI system trained on heart's electrical activity reduces deaths in high-risk patients by 31% in hospital trial, proving its potential to save lives.

data point • 1 month ago • Via Last Week in AI • www.newscientist.com

AI Notetaking Revolution. 'I will never go back': Ontario family doctor says new AI notetaking saved her job.

data point • 1 month ago • Via Last Week in AI • globalnews.ca

Shift to Enterprise Focus. AI startups that initially garnered attention with innovative generative AI products are now shifting their focus towards enterprise customers to enhance revenue streams.

insight • 1 month ago • Via Last Week in AI •

Meta's Ad Tool Issues. Meta's automated ad tool, Advantage Plus, has been overspending on ad budgets and failing to deliver sales, causing frustration among marketers and businesses.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

Lawsuit Against OpenAI. Eight U.S. newspaper publishers, all under the ownership of investment firm Alden Global Capital, have filed a lawsuit against Microsoft and OpenAI, alleging copyright infringement.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

Inverse Scaling Phenomenon. The authors also share their findings on the difficulty of creating and evaluating hard prompts, and the phenomenon of inverse scaling, where larger models fail tasks that smaller models can complete.

insight • 1 month ago • Via Last Week in AI •

Burnout in AI Industry. AI engineers in the tech industry are experiencing burnout and rushed rollouts due to the intense competition and pressure to stay ahead in the generative AI race.

insight • 1 month ago • Via Last Week in AI • www.cnbc.com

Microsoft's AI Policy Change. Microsoft bans U.S. police from using enterprise AI tool for facial recognition due to concerns about potential pitfalls and racial biases.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Evaluation Challenges. The authors discuss the challenges of creating hard prompts and the trade-offs between human and model-based automatic evaluation.

insight • 1 month ago • Via Last Week in AI •

New AI Model. New Microsoft AI model may challenge GPT-4 and Google Gemini.

data point • 1 month ago • Via Last Week in AI • arstechnica.com

Mystery Chatbot. Mysterious 'gpt2-chatbot' AI model appears suddenly, confuses experts.

data point • 1 month ago • Via Last Week in AI • arstechnica.com

AI Music Generation. ElevenLabs previews music-generating AI model.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

AI Content Labeling. TikTok will automatically label AI-generated content created on platforms like DALL·E 3.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

AI Audiobooks. Audible's Test of AI-Voiced Audiobooks Tops 40,000 Titles.

data point • 1 month ago • Via Last Week in AI • www.bloomberg.com

Deepfake Detector Release. OpenAI Releases 'Deepfake' Detector to Disinformation Researchers.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

AI Export Bill. US lawmakers unveil bill to make it easier to restrict exports of AI models.

data point • 1 month ago • Via Last Week in AI • www.reuters.com

OpenAI & Stack Overflow. OpenAI and Stack Overflow partner to bring more technical knowledge into ChatGPT.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Robotaxi Plans Delayed. Motional delays commercial robotaxi plans amid restructuring.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Funding for Autonomy. Wayve, an A.I. Start-Up for Autonomous Driving, Raises $1 Billion.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

Siri Revamp. Apple Will Revamp Siri to Catch Up to Its Chatbot Competitors.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

Advancements in Drug Discovery. AlphaFold 3 is expected to be particularly beneficial for drug discovery, as it can predict where a drug binds a protein, a feature that was absent in its predecessor, AlphaFold 2.

insight • 1 month ago • Via Last Week in AI •

TikTok AI Labeling. TikTok has announced that it will automatically label AI-generated content created on other platforms, such as OpenAI's DALL·E 3, using a technology called Content Credentials from the Coalition for Content Provenance and Authenticity (C2PA).

data point • 1 month ago • Via Last Week in AI • techcrunch.com

DeepSeek-V2 Features. DeepSeek AI releases DeepSeek-V2, a Mixture-of-Experts (MoE) language model, that is state-of-the-art, cost-effective, and efficient with 236B total parameters, of which 21B are activated for each token.

data point • 1 month ago • Via Last Week in AI •

Robot Dogs Testing. The United States Marine Forces Special Operations Command (MARSOC) is testing rifle-armed 'robot dogs' supplied by Onyx Industries.

data point • 1 month ago • Via Last Week in AI • www.twz.com

Microsoft Copilot Upgrade. Microsoft is introducing new AI features in Copilot for Microsoft 365 to help users create better prompts and become prompt engineers, aiming to improve productivity and efficiency in the workplace.

recommendation • 1 month ago • Via Last Week in AI • www.theverge.com

Wayve's $1 Billion Raise. Wayve, a London-based AI start-up for autonomous driving, raised an eye-popping $1 billion from investors like SoftBank, Microsoft, and Nvidia.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

AI and Deception. AI systems are becoming increasingly sophisticated in their capacity for deception, raising concerns about potential dangers to society and the need for AI safety laws.

insight • 1 month ago • Via Last Week in AI • www.theguardian.com

Safety Tool Release. U.K. Safety Institute releases an open-source toolset called Inspect to assess AI model safety, aiming to provide a shared, accessible approach to evaluations.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

AI Deepfake Detector. OpenAI releases a deepfake detector tool to combat the influence of AI-generated content on the upcoming elections, acknowledging that it's just the beginning of the fight against deepfakes.

recommendation • 1 month ago • Via Last Week in AI • www.nytimes.com

AlphaFold 3 Overview. Google's DeepMind has unveiled AlphaFold 3, an advanced version of its protein structure prediction tool, which can now predict the structures of DNA, RNA, and essential drug discovery molecules like ligands.

data point • 1 month ago • Via Last Week in AI • www.technologyreview.com

AI Model Competition. Microsoft is developing a new large-scale AI language model called MAI-1, potentially rivaling state-of-the-art models from Google, Anthropic, and OpenAI.

insight • 1 month ago • Via Last Week in AI • arstechnica.com

YouTube Version. You can watch the YouTube version of this here:

data point • 1 month ago • Via Last Week in AI •

AI News Summary. Our 167th episode with a summary and discussion of last week's big AI news!

data point • 1 month ago • Via Last Week in AI •

Guest Host. With guest host Daliana Liu from The Data Scientist Show!

data point • 1 month ago • Via Last Week in AI • www.linkedin.com

Special Interview. With a special one-time interview with Andrey in the latter part of the podcast.

data point • 1 month ago • Via Last Week in AI •

Listener Interaction. Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai.

recommendation • 1 month ago • Via Last Week in AI •

OpenAI GPT-4o. OpenAI releases GPT-4o, a faster model that's free for all ChatGPT users.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

Google AI Astra. Project Astra is the future of AI at Google.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

AI in Search. Google is redesigning its search engine — and it's AI all the way down.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

Google Media Models. Google unveils Veo and Imagen 3, its latest AI media creation models.

insight • 1 month ago • Via Last Week in AI • www.engadget.com

AI Music Sandbox. Google Unveils Music AI Sandbox Making Loops From Prompts.

insight • 1 month ago • Via Last Week in AI • www.cnet.com

Anthropic AI Tool. Anthropic AI Launches a Prompt Engineering Tool that Generates Production-Ready Prompts in the Anthropic Console.

insight • 1 month ago • Via Last Week in AI • www.marktechpost.com

OpenAI Leadership Change. OpenAI's Chief Scientist and Co-Founder Is Leaving the Company.

insight • 1 month ago • Via Last Week in AI • www.nytimes.com

Anthropic Leadership. Mike Krieger joins Anthropic as Chief Product Officer.

insight • 1 month ago • Via Last Week in AI • www.anthropic.com

Robotaxi Testing. GM's Cruise to start testing robotaxis in Phoenix area with human safety drivers on board.

insight • 1 month ago • Via Last Week in AI • abcnews.go.com

Zoox Probe. US agency probes Amazon-owned Zoox self-driving vehicles after two crashes.

insight • 1 month ago • Via Last Week in AI • www.reuters.com

Waymo Investigation. Waymo's robotaxis under investigation after crashes and traffic mishaps.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

New AI Models. Falcon 2: UAE's Technology Innovation Institute Releases New AI Model Series, Outperforming Meta's New Llama 3.

insight • 1 month ago • Via Last Week in AI • www.businesswire.com

AI Model Safety. U.K. agency releases tools to test AI model safety.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

AI Watermark. Google's invisible AI watermark will help identify generative text and video.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

AI Copyright Issues. How One Author Pushed the Limits of AI Copyright.

insight • 1 month ago • Via Last Week in AI • www.wired.com

Project Astra Launch. Google's Project Astra, a real-time, multimodal AI assistant, is the future of AI at Google, according to Demis Hassabis, the head of Google DeepMind.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

AI Legislation in Colorado. Colorado lawmakers have passed a landmark AI discrimination bill, which would prohibit employers from using AI to discriminate against workers.

data point • 1 month ago • Via Last Week in AI • www.jdsupra.com

AI in Journalism. Gannett is implementing AI-generated bullet points at the top of journalists' stories to enhance the reporting process.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

AI Emissions Concerns. Microsoft's emissions and water usage spiked due to the increased demand for AI technologies, posing challenges to meeting sustainability goals.

insight • 1 month ago • Via Last Week in AI • www.pcmag.com

Investment in AI. Microsoft announces a 4 billion euro investment in cloud and AI infrastructure, AI skilling, and French Tech acceleration.

data point • 1 month ago • Via Last Week in AI • news.microsoft.com

Reddit Content Partnership. Reddit's partnership with OpenAI allows the AI company to train its models on Reddit content, leading to a surge in Reddit shares.

insight • 1 month ago • Via Last Week in AI •

Waymo Investigation. The National Highway Traffic Safety Administration (NHTSA) has initiated an investigation into Alphabet's Waymo self-driving vehicles following reports of unexpected behavior and traffic safety violations.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Transparency Issues. This news came amidst the release of GPT-4o, but OpenAI's restrictive off-boarding agreement has raised concerns about the company's transparency.

insight • 1 month ago • Via Last Week in AI •

Multimodal Capabilities. The new model is 'natively multimodal,' meaning it can generate content or understand commands in voice, text, or images.

insight • 1 month ago • Via Last Week in AI •

OpenAI's GPT-4o Release. OpenAI has announced the release of GPT-4o, an enhanced version of the GPT-4 model that powers ChatGPT.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Astra's Functionality. Hassabis envisions AI's future to be less about the models and more about their functionality, with AI agents performing tasks on behalf of users.

insight • 1 month ago • Via Last Week in AI •

Fetch AI Assistant. Microsoft, Khan Academy provide free AI assistant for all educators in US.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

AI Regulation Bill. Colorado governor signs sweeping AI regulation bill.

data point • 1 month ago • Via Last Week in AI • thehill.com

AI Likeness Management. Hollywood agency CAA aims to help stars manage their own AI likenesses.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

AI Safety Commitments. Tech giants pledge AI safety commitments — including a ‘kill switch’ if they can't mitigate risks.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

Groundbreaking AI Law. World's first major law for artificial intelligence gets final EU green light.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

Emotional AI Initiative. Inflection AI reveals new team and plan to embed emotional AI in business bots.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

AI Voice Concerns. OpenAI says Sky voice in ChatGPT will be paused after concerns it sounds too much like Scarlett Johansson.

data point • 1 month ago • Via Last Week in AI • www.tomsguide.com

AI and Education. AI tutors are quietly changing how kids in the US study, offering affordable and personalized assistance for school assignments.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

First AI Regulation. EU member states have approved the world's first major law for regulating artificial intelligence, emphasizing trust, transparency, and accountability.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

Universal Basic Income. AI 'godfather' Geoffrey Hinton advocates for universal basic income to address AI's impact on job inequality and wealth distribution.

recommendation • 1 month ago • Via Last Week in AI • www.bbc.com

AI-Language Model War. Tencent and iFlytek have entered a price war by slashing prices of large-language models used for chatbots.

insight • 1 month ago • Via Last Week in AI • sg.news.yahoo.com

Generative AI Upgrade. Amazon is upgrading its decade-old Alexa voice assistant with generative artificial intelligence and plans to charge a monthly subscription fee.

insight • 1 month ago • Via Last Week in AI • www.cnbc.com

OpenAI's Response. OpenAI has temporarily halted the use of the Sky voice in its ChatGPT application due to its resemblance to actress Scarlett Johansson's voice.

insight • 1 month ago • Via Last Week in AI • www.tomsguide.com

Claude's Discoveries. One notable discovery was a feature associated with the Golden Gate Bridge, which, when activated, indicated that Claude was contemplating the landmark.

insight • 1 month ago • Via Last Week in AI •

Anthropic Research. A new research paper published by Anthropic aims to demystify the 'black box' phenomenon of AI's algorithmic behavior.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

AI Launch Issues. This incident continues a trend of Google facing issues with its latest AI features immediately after their launch, as seen in February 2023.

insight • 1 month ago • Via Last Week in AI •

Trust Undermined. This has led to a significant backlash online, undermining trust in Google's search engine, which is used by over two billion people for reliable information.

insight • 1 month ago • Via Last Week in AI •

Google's AI Errors. Google's recent unveiling of its new artificial intelligence (AI) capabilities for search has sparked controversy due to a series of errors and untruths.

insight • 1 month ago • Via Last Week in AI • www.nytimes.com

AI News Summary. Our 169th episode with a summary and discussion of last week's big AI news!

insight • 1 month ago • Via Last Week in AI •

Hollywood AI Partnerships. Alphabet, Meta Offer Millions to Partner With Hollywood on AI.

data point • 1 month ago • Via Last Week in AI • www.bloomberg.com

AI Cloning Fines. Robocaller Who Used AI to Clone Biden's Voice Fined $6 Million.

data point • 1 month ago • Via Last Week in AI • www.theaiwired.com

AI Safety Concerns. OpenAI researcher who resigned over safety concerns joins Anthropic.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Training Compute Growth. Training Compute of Frontier AI Models Grows by 4-5x per Year.

data point • 1 month ago • Via Last Week in AI • epochai.org

AI Model Rankings. Scale AI publishes its first LLM Leaderboards, ranking AI model performance in specific domains.

data point • 1 month ago • Via Last Week in AI • siliconangle.com

xAI Funding. Elon Musk's xAI raises $6 billion in latest funding round.

data point • 1 month ago • Via Last Week in AI • www.forbes.com.au

Nvidia Revenue Surge. Nvidia, Powered by A.I. Boom, Reports Soaring Revenue and Profits.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

ChatGPT Discounts. OpenAI launches programs making ChatGPT cheaper for schools and nonprofits.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Content Deals with OpenAI. Vox Media and The Atlantic sign content deals with OpenAI.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

PwC and OpenAI. PwC agrees deal to become OpenAI's first reseller and largest enterprise user.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

AI Earbuds Innovation. Iyo thinks its gen AI earbuds can succeed where Humane and Rabbit stumbled.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Real-time Video Translation. Microsoft Edge will translate and dub YouTube videos as you’re watching them.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Alexa's AI Overhaul. Amazon plans to give Alexa an AI overhaul — and a monthly subscription price.

data point • 1 month ago • Via Last Week in AI • www.cnbc.com

Opera's AI Integration. Opera is adding Google's Gemini AI to its browser.

data point • 1 month ago • Via Last Week in AI • www.engadget.com

Telegram Copilot Bot. Telegram gets an in-app Copilot bot.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Google AI Controversy. Google's A.I. Search Errors Cause a Furor Online.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

OpenAI Board Conflict. OpenAI is also embroiled in controversy, with former board member Helen Toner accusing CEO Sam Altman of dishonesty and manipulation during a failed coup attempt.

insight • 1 month ago • Via Last Week in AI •

Expensive AI Training Data. AI training data is becoming increasingly expensive, putting it out of reach for all but the wealthiest tech companies.

insight • 1 month ago • Via Last Week in AI •

Survey on AI Usage. AI products like ChatGPT are much hyped but not widely used, with only 2% of British respondents using such tools on a daily basis.

data point • 1 month ago • Via Last Week in AI •

AI Industry Tensions. The AI industry is seeing increasing tension, highlighted by a recent clash between Elon Musk and Yann LeCun on social media.

insight • 1 month ago • Via Last Week in AI •

EU AI Act Developments. The EU is establishing the AI Office to regulate AI risks, foster innovation, and influence global AI governance.

insight • 1 month ago • Via Last Week in AI •

Deepfake Concerns. A deepfake video of a U.S. official discussing Ukraine's potential strikes in Russia has surfaced, raising concerns about the use of AI-powered disinformation.

insight • 1 month ago • Via Last Week in AI •

AI Misuse in Influencing Campaigns. Russia and China used OpenAI's A.I. in covert campaigns to manipulate public opinion and influence geopolitics, raising concerns about the impact of generative A.I. on online disinformation.

insight • 1 month ago • Via Last Week in AI •

AI Search Tool Rollback. Google's new artificial intelligence feature for its search engine, A.I. Overviews, has been significantly rolled back after it produced a series of errors and false information.

insight • 1 month ago • Via Last Week in AI •

PwC as OpenAI Reseller. OpenAI has partnered with consulting giant PwC to provide ChatGPT Enterprise, the business-oriented version of its AI chatbot, to PwC employees and clients.

insight • 1 month ago • Via Last Week in AI •

Vox Media and OpenAI Partnership. Vox Media has announced a strategic partnership with OpenAI, aiming to leverage AI technology to enhance its content and product offerings.

insight • 1 month ago • Via Last Week in AI •

Musk's xAI Controversy. LeCun criticized Musk's leadership at xAI, calling him an erratic megalomaniac, following Musk's announcement of a $6 billion funding round for xAI.

insight • 1 month ago • Via Last Week in AI •

AGI by 2027. Former OpenAI researcher foresees AGI reality in 2027.

data point • 1 month ago • Via Last Week in AI • cointelegraph.com

AI Beauty Pageant. The Uncanny Rise of the World's First AI Beauty Pageant.

data point • 1 month ago • Via Last Week in AI • www.wired.com

GPT-4 Exam Performance. GPT-4 didn't ace the bar exam after all, MIT research suggests — it didn't even break the 70th percentile.

data point • 1 month ago • Via Last Week in AI • www.livescience.com

Election Risks. Testing and mitigating elections-related risks.

data point • 1 month ago • Via Last Week in AI • www.anthropic.com

OpenAI Whistleblowers. OpenAI Insiders Warn of a 'Reckless' Race for Dominance.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

Tech Giants Collaboration. Google, Intel, Microsoft, AMD and more team up to develop an interconnect standard to rival Nvidia's NVLink.

data point • 1 month ago • Via Last Week in AI • www.pcgamer.com

Microsoft Layoffs. Microsoft Lays Off 1,500 Workers, Blames 'AI Wave'.

data point • 1 month ago • Via Last Week in AI • futurism.com

Zoox Self-Driving Cars. Zoox to test self-driving cars in Austin and Miami.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

UAE AI Partnership. UAE seeks 'marriage' with US over artificial intelligence deals.

data point • 1 month ago • Via Last Week in AI • www.ft.com

Saudi Investment. Saudi fund invests in China effort to create rival to OpenAI.

data point • 1 month ago • Via Last Week in AI • www.ft.com

OpenAI Robotics Group. OpenAI is restarting its robotics research group.

data point • 1 month ago • Via Last Week in AI • www.therobotreport.com

Google's NotebookLM. Google's updated AI-powered NotebookLM expands to India, UK and over 200 other countries.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

ElevenLabs Sound Effects. ElevenLabs’ AI generator makes explosions or other sound effects with just a prompt.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Perplexity AI Feature. Perplexity AI's new feature will turn your searches into shareable pages.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Udio 130 Model. Udio introduces new udio-130 music generation model and more advanced features.

data point • 1 month ago • Via Last Week in AI • braintitan.medium.com

Apple's AI Features. 'Apple Intelligence' will automatically choose between on-device and cloud-powered AI.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

AI Video Generator. KLING is the latest AI video generator that could rival OpenAI's Sora.

data point • 1 month ago • Via Last Week in AI • the-decoder.com

Right to Warn. Thirteen current and former employees of OpenAI and Google DeepMind have published a proposal demanding the right to warn the public about the potential dangers of advanced artificial intelligence (AI).

data point • 1 month ago • Via Last Week in AI • www.vox.com

Anticipating AGI. Former OpenAI researcher predicts the arrival of AGI by 2027, foreseeing AI machines surpassing human intelligence and national security implications.

insight • 1 month ago • Via Last Week in AI • cointelegraph.com

ChatGPT Outage. OpenAI's ChatGPT experienced multiple outages, including a major one during the daytime in the US, but the issues were eventually resolved.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Amazon AI Impact. Amazon's use of AI and robotics in its warehouses isolates workers and hinders union organizing, according to a new report by Oxford University researchers.

insight • 1 month ago • Via Last Week in AI • www.404media.co

FTC Antitrust Investigations. FTC and DOJ open antitrust investigations into Microsoft, OpenAI, and Nvidia, with the FTC looking into potential antitrust issues related to investments made by technology companies into smaller AI companies.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Microsoft's AI Investment. Microsoft plans to invest $3.2 billion in AI infrastructure in Sweden, including training 250,000 people and increasing capacity at its data centers.

data point • 1 month ago • Via Last Week in AI • finance.yahoo.com

AI Chatbot Accuracy. AI chatbots, including Google’s Gemini 1.0 Pro and OpenAI’s GPT-3, provided incorrect information 27% of the time when asked about voting and the 2024 election.

data point • 1 month ago • Via Last Week in AI • www.nbcnews.com

Kuaishou's New Product. Kuaishou, a Chinese short-video app, has launched a text-to-video service similar to OpenAI's Sora, as part of the race among Chinese Big Tech firms to catch up with US counterparts in AI applications.

insight • 1 month ago • Via Last Week in AI •

Concept Storage Method. A new research paper from OpenAI introduces a method to identify how the AI stores concepts that might cause misbehavior.

data point • 1 month ago • Via Last Week in AI • cdn.openai.com

Whistleblower Protections. The proposal also calls for the abolition of nondisparagement agreements that prevent insiders from voicing risk-related concerns.

insight • 1 month ago • Via Last Week in AI •

Conversations with Siri. Key features include a more conversational Siri, AI-generated 'Genmoji,' and integration with OpenAI's GPT-4o for handling complex requests.

insight • 1 month ago • Via Last Week in AI •

Deepfake Impact. AI played a significant role in the Indian election, with political parties using deepfakes and AI-generated content for targeted communication, translation of speeches, and personalized voter outreach.

insight • 1 month ago • Via Last Week in AI • theconversation.com

Regulatory Challenges. Waymo issues a voluntary software recall after a driverless vehicle collides with a telephone pole, prompting increased regulatory scrutiny of the autonomous vehicle industry.

insight • 1 month ago • Via Last Week in AI •

OpenAI Revenue Growth. OpenAI's annualized revenue has more than doubled in the last six months, reaching $3.4 billion.

data point • 1 month ago • Via Last Week in AI • www.pymnts.com

OpenAI Partnership. OpenAI and Apple announce partnership to integrate ChatGPT into Apple experiences.

data point • 1 month ago • Via Last Week in AI • openai.com

Generative Video Creation. Dream Machine enables users to create high-quality videos from simple text prompts such as 'a cute Dalmatian puppy running after a ball on the beach at sunset.'

insight • 1 month ago • Via Last Week in AI •

Luma AI Launch. Luma AI has launched the public beta of its new AI video generation model, Dream Machine, which has garnered overwhelming user interest.

data point • 1 month ago • Via Last Week in AI • siliconangle.com

Apple AI Features. Apple has announced 'Apple Intelligence,' a suite of AI features for iPhone, Mac, and more at WWDC 2024.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Perplexity Controversy. Buzzy AI Search Engine Perplexity Is Directly Ripping Off Content From News Outlets.

data point • 1 month ago • Via Last Week in AI • www.forbes.com

Huawei's Chip Concerns. Huawei exec concerned over China's inability to obtain 3.5nm chips, bemoans lack of advanced chipmaking tools.

data point • 1 month ago • Via Last Week in AI • www.tomshardware.com

Waymo's Recall. Waymo issues software and mapping recall after robotaxi crashes into a telephone pole.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Reward Tampering Research. Sycophancy to subterfuge: Investigating reward tampering in language models.

data point • 1 month ago • Via Last Week in AI • www.anthropic.com

Meta's AI Models. Meta releases flurry of new AI models for audio, text and watermarking.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

Adept and Microsoft Deal. AI startup Adept is in deal talks with Microsoft.

data point • 1 month ago • Via Last Week in AI • fortune.com

OpenAI Revenue Growth. Report: OpenAI Doubled Annualized Revenue in 6 Months.

data point • 1 month ago • Via Last Week in AI • www.pymnts.com

Claude 3.5 Release. Anthropic just dropped Claude 3.5 Sonnet with better vision and a sense of humor.

data point • 1 month ago • Via Last Week in AI • www.tomsguide.com

Runway Video Model. Runway unveils new hyper realistic AI video model Gen-3 Alpha, capable of 10-second-long clips.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

Luma's Dream Machine. 'We don’t need Sora anymore': Luma’s new AI video generator Dream Machine slammed with traffic after debut.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

New Apple Features. Apple Intelligence: every new AI feature coming to the iPhone and Mac.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Emotion Detection Controversy. AI-powered cameras in UK train stations, including London's Euston and Waterloo, used Amazon software to scan faces and predict emotions, age, and gender for potential advertising and safety purposes, raising concerns about privacy and reliability.

concern • 1 month ago • Via Last Week in AI • www.wired.com

Claude 3.5 Sonnet Launch. Anthropic has launched its latest AI model, Claude 3.5 Sonnet, which it claims can match or surpass the performance of OpenAI’s GPT-4o or Google’s Gemini across a broad range of tasks.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

AI Influencer Ads. AI-generated avatars are being introduced on TikTok for brands to use in ads, allowing for customization and dubbing in multiple languages.

data point • 1 month ago • Via Last Week in AI • www.nytimes.com

AI Models Comparison. Fireworks AI releases Firefunction-v2, an open-source function-calling model designed to excel in real-world applications, rivaling high-end models like GPT-4o at a fraction of the cost and with superior speed and functionality.

insight • 1 month ago • Via Last Week in AI • www.marktechpost.com

Brave AI Enhancement. Brave's in-browser AI assistant, Leo, now incorporates real-time Brave Search results, providing more accurate and up-to-date answers.

data point • 1 month ago • Via Last Week in AI • brave.com

Revenue Loss Estimate. The publishing industry is expected to lose over $10 billion due to such practices, according to Ameet Shah, partner and SVP of publisher operations and strategy at Prohaska Consulting.

data point • 1 month ago • Via Last Week in AI •

Publisher Backlash. AI search startup Perplexity, backed by Jeff Bezos and other tech giants, is facing backlash from publishers like The New York Times, The Guardian, Condé Nast, and Forbes for allegedly circumventing blocks to access and repurpose their content.

data point • 1 month ago • Via Last Week in AI • www.adweek.com

Benchmark Test Performance. Claude 3.5 Sonnet excelled in benchmark tests, outscoring GPT-4o, Gemini 1.5 Pro, and Meta's Llama 3 400B in most categories.

data point • 1 month ago • Via Last Week in AI •

AI-Generated Script Backlash. London premiere of AI-generated script film cancelled after backlash from audience and industry, highlighting ongoing debate over AI's role in the film industry.

concern • 1 month ago • Via Last Week in AI • www.theguardian.com

Speed Improvement. The new model, which is available to Claude users on the web and iOS, and to developers, is said to be twice as fast as its predecessor and to outperform the previous top model, Claude 3 Opus.

data point • 1 month ago • Via Last Week in AI •

Gemini Side Panels. Google rolls out Gemini side panels for Gmail and other Workspace apps.

insight • 1 month ago • Via Last Week in AI • www.engadget.com

Voice Mode Delay. OpenAI delays rolling out its 'Voice Mode' to July.

insight • 1 month ago • Via Last Week in AI • www.channelnewsasia.com

AI News Summary. Our 172nd episode with a summary and discussion of last week's big AI news!

data point • 1 month ago • Via Last Week in AI •

Collaboration Tools. Anthropic Debuts Collaboration Tools for Claude AI Assistant.

insight • 1 month ago • Via Last Week in AI • www.pymnts.com

AI Music Lawsuits. Music labels sue AI music generators for copyright infringement.

insight • 1 month ago • Via Last Week in AI • arstechnica.com

AI Safety Bill. Y Combinator rallies start-ups against California's AI safety bill.

insight • 1 month ago • Via Last Week in AI • www.siliconrepublic.com

Stock Sale Policies. OpenAI walks back controversial stock sale policies, will treat current and former employees the same.

insight • 1 month ago • Via Last Week in AI • www.cnbc.com

Advanced AI Chip. China's ByteDance working with Broadcom to develop advanced AI chip, sources say.

insight • 1 month ago • Via Last Week in AI • theedgemalaysia.com

Figma AI Redesign. Figma announces big redesign with AI.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

Waymo Robotaxis. Waymo ditches the waitlist and opens up its robotaxis to everyone in San Francisco.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

ChatGPT for Mac. OpenAI's ChatGPT for Mac is now available to all users.

insight • 1 month ago • Via Last Week in AI • arstechnica.com

Ethical AI Positioning. Anthropic aims to enable beneficial uses of AI by government agencies, positioning itself as an ethical choice among rivals.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

Overestimating AI Capabilities. MIT robotics pioneer Rodney Brooks believes that people are overestimating the capabilities of generative AI and that it's flawed to assign human capabilities to it.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

AI Scaling Myths. The belief that AI scaling will lead to artificial general intelligence is based on misconceptions about scaling laws, the availability of training data, and the limitations of synthetic data.

insight • 1 month ago • Via Last Week in AI • www.aisnakeoil.com

Formation Bio Investment. Formation Bio raises $372M in Series D funding to apply AI to drug development, aiming to streamline clinical trials and drug development processes.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Humanoid Robot Deployment. Agility Robotics' Digit humanoids have landed their first official job with GXO Logistics Inc., marking the industry's first formal commercial deployment of humanoids.

data point • 1 month ago • Via Last Week in AI • www.therobotreport.com

Google Translate Expansion. Google Translate has added 110 new languages, including Cantonese and Punjabi, bringing the total of supported languages to nearly 250.

data point • 1 month ago • Via Last Week in AI • lifehacker.com

AI Voice Imitations Controversy. Morgan Freeman expresses gratitude to fans for calling out unauthorized AI imitations of his voice, highlighting the growing issue of AI-generated voice imitations in the entertainment industry.

insight • 1 month ago • Via Last Week in AI • variety.com

New Collaboration Tools. Anthropic has launched an update to enhance team collaboration and productivity, introducing a Projects feature that allows users to organize their interactions with Claude.

data point • 1 month ago • Via Last Week in AI • www.pymnts.com

Path to Profitability. The company's expansion of its service to all San Francisco residents is seen as a crucial step towards the normalization of autonomous vehicles and a potential path to profitability for the historically money-losing operation.

insight • 1 month ago • Via Last Week in AI •

Waymo Expansion. Waymo announced that its robotaxi service in San Francisco is now open to the public, eliminating the need for customers to sign up for a waitlist.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

AI Music Lawsuits. Universal Music Group, Sony Music, and Warner Records have filed lawsuits against AI music-synthesis companies Udio and Suno, accusing them of mass copyright infringement.

data point • 1 month ago • Via Last Week in AI • arstechnica.com

Performance Improvement. CriticGPT has shown significant effectiveness, with human reviewers using CriticGPT performing 60% better in evaluating ChatGPT's code outputs than those without such assistance.

data point • 1 month ago • Via Last Week in AI •

CriticGPT Introduction. OpenAI has introduced a new AI model, CriticGPT, designed to identify errors in the outputs of ChatGPT, an AI system built on the GPT-4 architecture.

data point • 1 month ago • Via Last Week in AI • www.marktechpost.com

China's AI Competition. The conversation includes China's competition in AI and its impacts.

insight • 1 month ago • Via Last Week in AI •

AI Features Discussion. The episode covers emerging AI features and legal disputes over data usage.

insight • 1 month ago • Via Last Week in AI •

Workforce Development. U.S. government addresses critical workforce shortages for the semiconductor industry with a new program.

recommendation • 1 month ago • Via Last Week in AI • www.tomshardware.com

Nvidia's Revenue. Nvidia is expected to make $12 billion from AI chips in China this year despite US controls.

data point • 1 month ago • Via Last Week in AI • www.ft.com

AI Regulation Issues. With Chevron's demise, AI regulation seems dead in the water.

insight • 1 month ago • Via Last Week in AI • techcrunch.com

Machine Learning Fund. Bridgewater starts a $2 billion fund that uses machine learning for decision-making.

data point • 1 month ago • Via Last Week in AI • fortune.com

Runway's Gen 3 Alpha. Runway's Gen-3 Alpha AI video model is now available, but there’s a catch.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

LLaMA 3 Release. Meta is about to launch its biggest LLaMA model yet.

data point • 1 month ago • Via Last Week in AI • www.tomsguide.com

Gemini 1.5 Launch. Google releases Gemini 1.5 Flash and Pro, with 2M-token context windows, to the public.

data point • 1 month ago • Via Last Week in AI • venturebeat.com

Apple's Board Role. Apple Inc. has secured an observer role on OpenAI's board, with Phil Schiller, Apple's App Store head and former marketing chief, appointed to the position.

data point • 1 month ago • Via Last Week in AI • www.bloomberg.com

Integrating ChatGPT. This move follows Apple's announcement to integrate ChatGPT into its iPhone, iPad, and Mac devices.

insight • 1 month ago • Via Last Week in AI •

AI Bias in Medical Imaging. AI models analyzing medical images can be biased, particularly against women and people of color, and while debiasing strategies can improve fairness, they may not generalize well to new patient populations.

recommendation • 1 month ago • Via Last Week in AI • medicalxpress.com

Democratizing AI Access. Mozilla's Llamafile and Builders Projects were showcased at the AI Engineer World's Fair, emphasizing democratized access to AI technology.

insight • 1 month ago • Via Last Week in AI • thenewstack.io

Mind-reading AI Progress. AI can accurately recreate what someone is looking at from their brain activity, with accuracy greatly improved when the AI learns which parts of the brain to focus on.

insight • 1 month ago • Via Last Week in AI • www.newscientist.com

AI Model Evaluation Advocacy. Anthropic is advocating for third-party AI model evaluations to assess capabilities and risks, focusing on safety levels, advanced metrics, and efficient evaluation development.

insight • 1 month ago • Via Last Week in AI • www.enterpriseai.news

AI Coding Startup Valuation. AI coding startup Magic seeks $1.5-billion valuation in new funding round, aiming to develop AI models for writing software.

data point • 1 month ago • Via Last Week in AI • finance.yahoo.com

AI Music Generation. Suno launches an iPhone app that lets users make AI music on the go, generating full songs from text prompts or sound.

data point • 1 month ago • Via Last Week in AI • www.tomsguide.com

New AI Model Release. Kyutai has open-sourced Moshi, a real-time native multimodal foundation AI model that can listen and speak simultaneously.

data point • 1 month ago • Via Last Week in AI • www.marktechpost.com

Security Flaw Discovered. OpenAI's ChatGPT macOS app was found to be storing user conversations in plain text, making them easily accessible to potential malicious actors.

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Concerns Over AI Safety. OpenAI is facing safety concerns from employees and external sources, raising worries about the potential impact on society.

insight • 1 month ago • Via Last Week in AI • www.theverge.com

AI Lawsuits Implications. AI music lawsuits could shape the future of the music industry, as major labels sue AI firms for alleged copyright infringement.

insight • 1 month ago • Via Last Week in AI • www.billboard.com

AI Video Model Development. Odyssey is developing an AI video model that can create Hollywood-grade visual effects and allow users to edit and control the output at a granular level.

data point • 1 month ago • Via Last Week in AI •

AI Health Coach Collaboration. OpenAI and Arianna Huffington are collaborating on an 'AI health coach' that aims to provide personalized health advice and guidance based on individual data.

insight • 1 month ago • Via Last Week in AI •

FlashAttention-3 Efficiency. The results show that FlashAttention-3 achieves a 1.5-2.0x speedup on H100 GPUs, with FP16 reaching up to 740 TFLOPs/s and FP8 reaching close to 1.2 PFLOPs/s.

data point • 1 month ago • Via Last Week in AI •

Antitrust Concerns. These changes occur amid growing antitrust concerns over Microsoft's partnership with OpenAI, with regulators in the UK and EU scrutinizing the deal.

insight • 1 month ago • Via Last Week in AI •

Regulatory Scrutiny Reaction. Microsoft has relinquished its observer seat on the board of OpenAI, a move that comes less than eight months after it secured the non-voting position.

data point • 1 month ago • Via Last Week in AI •

OpenAI Security Breach. In early 2022, a hacker infiltrated OpenAI's internal messaging systems, stealing information about the design of the company's AI technologies.

data point • 1 month ago • Via Last Week in AI •

Perception of Progress Assessment. Despite the introduction of this system, there is no consensus in the AI research community on how to measure progress towards AGI, and some view OpenAI's five-tier system as a tool to attract investors rather than a scientific measurement of progress.

insight • 1 month ago • Via Last Week in AI •

Advancements in AGI. OpenAI is reportedly close to reaching Level 2, or 'Reasoners,' which would be capable of basic problem-solving on par with a human with a doctorate degree.

data point • 1 month ago • Via Last Week in AI •

Current AI Level. OpenAI's technology, such as GPT-4o that powers ChatGPT, is currently at Level 1, which includes AI that can engage in conversational interactions.

data point • 1 month ago • Via Last Week in AI •

OpenAI's Five-Tier Model. OpenAI has introduced a five-tier system to track its progress towards developing artificial general intelligence (AGI).

data point • 1 month ago • Via Last Week in AI •

AI Industry Challenges. We delve into the latest advancements and challenges in the AI industry, highlighting new features from Figma and Quora, regulatory pressures on OpenAI, and significant investments in AI infrastructure.

insight • 1 month ago • Via Last Week in AI •

OpenAI and Health Coach. OpenAI and Arianna Huffington are working together on an 'AI health coach.'

data point • 1 month ago • Via Last Week in AI • www.theverge.com

Mind-Reading AI. Mind-reading AI recreates what you're looking at with amazing accuracy.

data point • 1 month ago • Via Last Week in AI • www.newscientist.com

New AI Features. Figma pauses its new AI feature after Apple controversy.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

AI-generated Content Labels. Vimeo joins YouTube and TikTok in launching new AI content labels.

data point • 1 month ago • Via Last Week in AI • techcrunch.com

Content Regulation Pressure. There is a need for transparency and regulation in AI content labeling and licensing.

insight • 1 month ago • Via Last Week in AI •

AI Coding Startup. AI coding startup Magic seeks a $1.5-billion valuation in new funding round, sources say.

data point • 1 month ago • Via Last Week in AI • finance.yahoo.com

Elon Musk's GPU Plans. Elon Musk reveals plans to make the world's 'Most Powerful' 100,000 NVIDIA GPU AI cluster.

data point • 1 month ago • Via Last Week in AI • wccftech.com

AMD Acquisition News. AMD plans to acquire Silo AI in a $665 million deal.

data point • 1 month ago • Via Last Week in AI • finance.yahoo.com

Regurgitation Process. The regurgitative process need not be verbatim.

insight • 1 month ago • Via Gary Marcus on AI •

Neural Nets Critique. Gary Marcus criticizes neural nets, stating, 'Neural nets don't really understand anything they read on the web.'

insight • 1 month ago • Via Gary Marcus on AI • garymarcus.substack.com

Need for New Approach. Getting to real AI will require a different approach.

recommendation • 1 month ago • Via Gary Marcus on AI •

Understanding Proof. Partial regurgitation, no matter how fluent, does not, and will not ever, constitute genuine comprehension.

insight • 1 month ago • Via Gary Marcus on AI •

AI's Limitations. LLMs are great at clustering similar things but end up 'regurgitating a lot of words with slight paraphrases while adding conceptually little, and understanding even less.'

insight • 1 month ago • Via Gary Marcus on AI •

Partial Regurgitation Defined. The term 'partial regurgitation' is introduced to describe AI's output not being a full reconstruction of the original source.

insight • 1 month ago • Via Gary Marcus on AI •

Storage of Weights. Neural nets do store weights, but that doesn't mean that they know what they are talking about.

insight • 1 month ago • Via Gary Marcus on AI •

Financial Priorities. Instead, they appear to be focused precisely on financial return, and appear almost indifferent to some of the ways in which their product has already hurt large numbers of people (artists, writers, voiceover actors, etc).

insight • 1 month ago • Via Gary Marcus on AI •

OpenAI's Mission. As recently as November 2023, OpenAI promised in their filing as a nonprofit exempt from income tax to make AI that 'benefits humanity … unconstrained by a need to generate financial return'.

data point • 1 month ago • Via Gary Marcus on AI •

Future of AI. Gary Marcus hopes that the most ethical company wins. And that we don’t leave our collective future entirely to self-regulation.

insight • 1 month ago • Via Gary Marcus on AI •

Ethical Concerns. The real issue isn’t whether OpenAI would win in court, it’s what happens to all of us, if a company with a track record for cutting ethical corners winds up first to AGI.

insight • 1 month ago • Via Gary Marcus on AI •

Comparison to DeepMind. By comparison, Google DeepMind devotes a lot of its energy towards projects like AlphaFold that have clear potential to help humanity.

insight • 1 month ago • Via Gary Marcus on AI •

Safety Resources. Furthermore, OpenAI apparently hasn’t even fulfilled their own promises to devote 20% resources to AI safety.

insight • 1 month ago • Via Gary Marcus on AI •

Product Focus. The first step towards that should be a question about product – are the products we are making benefiting humanity?

recommendation • 1 month ago • Via Gary Marcus on AI •

Copyright Issues. OpenAI has trained on a massive amount of copyrighted material, without consent, and in many instances without compensation.

insight • 1 month ago • Via Gary Marcus on AI •

Call for Independent Oversight. Without independent scientists in the loop, with a real voice, we are lost.

recommendation • 1 month ago • Via Gary Marcus on AI •

Questioning OpenAI's Trustworthiness. It's correct for the public to take everything OpenAI says with a grain of salt, especially because of their massive power and chance to potentially put humanity at risk.

insight • 1 month ago • Via Gary Marcus on AI •

Tax Status Conflict. OpenAI filed for non-profit tax exempt status, claiming that the company's mission was to 'safely benefit humanity', even as they turn over almost half their profits to Microsoft.

insight • 1 month ago • Via Gary Marcus on AI •

Governance Promises Broken. Altman once promised that outsiders would play an important role in the company's governance; that key promise has not been kept.

insight • 1 month ago • Via Gary Marcus on AI • www.newyorker.com

Restrictive Employee Contracts. OpenAI had highly unusual contractual 'clawback' clauses designed to keep employees from speaking out about any concerns about the company.

insight • 1 month ago • Via Gary Marcus on AI • www.vox.com

Unmet Safety Promises. OpenAI promised to devote 20% of its efforts to AI safety, but never delivered, according to a recent report.

insight • 1 month ago • Via Gary Marcus on AI • fortune.com

Altman's Conflicts of Interest. Altman appears to have misled people about his personal holdings in OpenAI, omitting potential conflicts of interest between his role as CEO of the nonprofit OpenAI and other companies he might do business with.

insight • 1 month ago • Via Gary Marcus on AI •

CTO's Miscommunication. CTO Mira Murati embarrassed herself and the company in her interview with Joanna Stern of the Wall Street Journal, sneakily conflating 'publicly available' with 'public domain'.

insight • 1 month ago • Via Gary Marcus on AI •

Misuse of Artist's Voice. OpenAI proceeded to make a Scarlett Johansson-like voice for GPT-4o, even after she specifically told them not to, highlighting their overall dismissive attitude towards artist consent.

insight • 1 month ago • Via Gary Marcus on AI • www.npr.org

OpenAI's Misleading Name. OpenAI called itself open, and traded on the notion of being open, but even as early as May 2016 knew that the name was misleading.

insight • 1 month ago • Via Gary Marcus on AI • substackcdn.com

Governance Representation. Sam Altman, 2016: 'We’re planning a way to allow wide swaths of the world to elect representatives to a new governance board.'

data point • 1 month ago • Via Gary Marcus on AI • www.newyorker.com

Accountability Reminder. Gary Marcus keeps receipts.

insight • 1 month ago • Via Gary Marcus on AI •

Questioning Authority. What happened to the wide swaths of the world? To quote Altman himself, 'Why do these fuckers get to decide what happens to me?'

insight • 1 month ago • Via Gary Marcus on AI •

Toner's Whistleblowing. Toner was pushed out for her sin of speaking up.

insight • 1 month ago • Via Gary Marcus on AI •

Firing Consideration. The board had contemplated firing Sam over trust issues before that.

insight • 1 month ago • Via Gary Marcus on AI •

ChatGPT Announcement. The board was not informed in advance about that [ChatGPT], we learned about ChatGPT on Twitter.

insight • 1 month ago • Via Gary Marcus on AI •

Safety Process Inaccuracy. On multiple occasions he gave inaccurate information about the small number of formal safety processes that the company did have in place.

insight • 1 month ago • Via Gary Marcus on AI •

Oversight Concerns. Altman is consolidating more and more power and seeming less and less on the level.

insight • 1 month ago • Via Gary Marcus on AI •

Sam's Deceit. Putting Toner's disclosures together with the other lies from OpenAI that I documented the other day, I think we can safely put Kara's picture of Sam the Innocent to bed.

insight • 1 month ago • Via Gary Marcus on AI •

Conflict of Interest. Sam has now divested his stake in that investment firm.

insight • 1 month ago • Via Gary Marcus on AI •

Trust Issues. If they can't trust Altman, I don't see how they can do their job.

insight • 1 month ago • Via Gary Marcus on AI •

Nonprofit Status. If they cannot assemble a board that respects the legal filings they made, and cannot behave in keeping with their oft-repeated promises, they must dissolve the nonprofit.

recommendation • 1 month ago • Via Gary Marcus on AI • www.citizen.org

Lack of Candor. The (old) board never said that the firing of Sam was directly about safety, they said it was about candor.

insight • 1 month ago • Via Gary Marcus on AI •

Misleading Claims. Both read to me as deeply misleading, verging on defamatory.

insight • 1 month ago • Via Gary Marcus on AI •

Lack of Trust. The degree to which they diverted from that core issue that led to Sam's firing is genuinely disturbing.

insight • 1 month ago • Via Gary Marcus on AI •

Board Attacks. At least two proxies have gone after Helen Toner, one in The Economist, highbrow, one low (a post on X that got around 200,000 views).

data point • 1 month ago • Via Gary Marcus on AI • www.economist.com

Time's Ravages. What I said then to Bach still holds, 100%, 26 months later.

insight • 1 month ago • Via Gary Marcus on AI •

Longstanding Warnings. Gary Marcus has warned people about the limits of deep learning, including hallucinations, since 2001.

data point • 1 month ago • Via Gary Marcus on AI •

Musk's Shift. Musk has switched teams, flipping from calling for a pause to going all in on a technology that remains exactly as incorrigible as it ever was.

insight • 1 month ago • Via Gary Marcus on AI •

Alignment Problem. We are no closer to a solution to the alignment problem now than we were then.

insight • 1 month ago • Via Gary Marcus on AI •

Unmet Expectations. For all the daily claims of 'exponential progress', reliability is still a dream.

insight • 1 month ago • Via Gary Marcus on AI •

Deep Learning Critique. The ridicule started with my infamous 'Deep Learning Is Hitting a Wall' essay.

insight • 1 month ago • Via Gary Marcus on AI • garymarcus.substack.com

Financial Conflicts. The Wall Street Journal had a long discussion of Altman’s financial holdings and possible conflicts of interest.

data point • 1 month ago • Via Gary Marcus on AI • www.wsj.com

Musk-LeCun Tension. Yann LeCun just pushed Elon Musk to the point of unfollowing him.

insight • 1 month ago • Via Gary Marcus on AI •

Kara Swisher's Bias. Paris Marx echoed my own feelings about Kara Swisher’s apparent lack of objectivity around Altman.

insight • 1 month ago • Via Gary Marcus on AI • disconnect.blog

Slowing Innovation. Christopher Mims echoed much of what I have been arguing here, writing that 'The pace of innovation in AI is slowing, its usefulness is limited, and the cost of running it remains exorbitant.'

insight • 1 month ago • Via Gary Marcus on AI • www.wsj.com

No Breakthroughs. It has been almost two years since there’s been a bona fide GPT-4-sized breakthrough, despite the constant boasts of exponential progress.

insight • 1 month ago • Via Gary Marcus on AI •

Lackluster Fireside Chat. Melissa Heikkilä at Technology Review more or less panned Altman’s recent fireside chat at AI for Good.

data point • 1 month ago • Via Gary Marcus on AI • mailchi.mp

Bad Press for Altman. The bad press about Sam Altman and OpenAI, who once seemingly could do no wrong, just keeps coming.

insight • 1 month ago • Via Gary Marcus on AI •

Key Contributors. The letter itself, cosigned by Bengio, Hinton, and Russell.

data point • 1 month ago • Via Gary Marcus on AI • righttowarn.ai

Informed Endorsement. I fully endorse its four recommendations.

insight • 1 month ago • Via Gary Marcus on AI •

Gift Link Provided. Roose supplied a gift link.

data point • 1 month ago • Via Gary Marcus on AI • x.com

Common Sense Emphasis. Nowadays we both stress the absolutely essential nature of common sense, physical reasoning and world models, and the failure of current architectures to handle those well.

insight • 1 month ago • Via Gary Marcus on AI •

Future AI Development. If you want to argue that some future, as yet unknown form of deep learning will be better, fine, but with regards to what exists and is popular now, your view has come to mirror my own.

insight • 1 month ago • Via Gary Marcus on AI •

Critique Overlap. Your current critique for what is wrong with LLMs overlaps heavily with what I said repeatedly from 2018 to 2022.

insight • 1 month ago • Via Gary Marcus on AI •

Potential Alliance. The irony of all of this is that you and I are among the minority of people who have come to fully understand just how limited LLMs are, and what we need to do next. We should be allies.

recommendation • 1 month ago • Via Gary Marcus on AI •

Historical Dismissals. There is a clear pattern: you often initially dismiss my ideas, only to converge on the same place later — without ever citing my earlier arguments.

insight • 1 month ago • Via Gary Marcus on AI •

Funding Decline. Generative AI seed funding drops.

data point • 1 month ago • Via Gary Marcus on AI • pitchbook.com

Underprepared for AGI. We are woefully underprepared for AGI whenever it comes.

insight • 1 month ago • Via Gary Marcus on AI •

Read Marcus's Book. Gary Marcus wrote his new book Taming Silicon Valley in part to address regulatory issues.

recommendation • 1 month ago • Via Gary Marcus on AI • garymarcus.substack.com

Regulatory Failure. Self-regulation is a farce, and the US legislature has made almost no progress thus far.

insight • 1 month ago • Via Gary Marcus on AI •

Data Point Validity. Every data point there is imaginary; we aren’t plotting real things here.

insight • 1 month ago • Via Gary Marcus on AI •

Graph Issues. The double Y-axis makes no sense, and presupposes its own conclusion.

insight • 1 month ago • Via Gary Marcus on AI •

GPT-4 Comparisons. GPT-4 is not actually equivalent to a smart high schooler.

insight • 1 month ago • Via Gary Marcus on AI •

AGI Prediction. OpenAI's internal roadmap alleged that AGI would be achieved by 2027.

data point • 1 month ago • Via Gary Marcus on AI •

Proposed Bill SB-1047. State Senator Scott Wiener and others in California have proposed a bill, SB-1047, that would build in some modest restraints around AI.

data point • 1 month ago • Via Gary Marcus on AI • leginfo.legislature.ca.gov

Serious Damage Definition. Hazardous is defined here as half a billion dollars in damage; should we give the AI industry a free pass no matter how much harm might be done?

insight • 1 month ago • Via Gary Marcus on AI •

Regulation vs. Innovation. The Information's op-ed complains that 'California's effort to regulate AI would stifle innovation', but never really details how.

insight • 1 month ago • Via Gary Marcus on AI •

Demand for Stronger Regulation. We should be making SB-1047 stronger, not weaker.

recommendation • 1 month ago • Via Gary Marcus on AI •

Concern over Liability. Andrew Ng complains that the bill defines an unreasonable 'hazardous capability' designation that may make builders of large AI models liable if someone uses their models to do something that exceeds the bill's definition of harm.

insight • 1 month ago • Via Gary Marcus on AI •

Self-Regulation Skepticism. Big Tech's overwhelming message is 'Trust Us'. Should we?

insight • 1 month ago • Via Gary Marcus on AI •

Certification Requirements. Anyone training a 'covered AI model' must certify, under penalty of perjury, that their model will not be used to enable a 'hazardous capability' in the future.

data point • 1 month ago • Via Gary Marcus on AI •

Industry Pushback. Both the well-known deep-learning expert Andrew Ng and the industry newspaper The Information came out against 1047 in vigorous terms.

data point • 1 month ago • Via Gary Marcus on AI •

Regulatory Support Lack. Not one of the companies that previously stood up and said they support AI regulation is standing up for this one.

insight • 1 month ago • Via Gary Marcus on AI •

OpenAI's CTO Admission. OpenAI's CTO Mira Murati acknowledged that there is no mind-blowing GPT-5 behind the scenes as of yet.

data point • 1 month ago • Via Gary Marcus on AI • x.com

Kurzweil's Prediction. Ray Kurzweil confirmed he has neither revised nor redefined his prediction of AGI, still believing it will happen by 2029.

data point • 1 month ago • Via Gary Marcus on AI •

Future Expectations. Expect more revisionism and downsized expectations throughout 2024 and 2025.

recommendation • 1 month ago • Via Gary Marcus on AI •

Expectations for LLMs. The ludicrously high expectations from the last 18 ChatGPT-drenched months were never going to be met.

insight • 1 month ago • Via Gary Marcus on AI •

Kurzweil's New Projection. In an interview published in WIRED, Kurzweil let his predictions slip back, for the first time, to 2032.

data point • 1 month ago • Via Gary Marcus on AI • www.wired.com

Public Predictions. Nobody to my knowledge has kept systematic track of the predictions, but I took a quick and somewhat random look at X and had no trouble finding many predictions, going back to 2023, almost always optimistic.

data point • 1 month ago • Via Gary Marcus on AI •

Hallucination Concerns. Gary Marcus is still betting that GPT-5 will continue to hallucinate and make a bunch of wacky errors, whenever it finally drops.

insight • 1 month ago • Via Gary Marcus on AI •

Future Predictions Meme. Now arriving Gate 2024, Gate 2025, ... Gate 2026.

insight • 1 month ago • Via Gary Marcus on AI •

New Meme Observed. By now there’s actually a new meme in town. This one’s got even more views.

insight • 1 month ago • Via Gary Marcus on AI •

Confidence in Predictions. A lot of them got tons of views... What stands out the most, maybe, is the confidence with which so many were presented.

insight • 1 month ago • Via Gary Marcus on AI •

GPT-5 Training Status. Sam Altman just a few weeks ago officially announced that OpenAI had only just started training GPT-5.

insight • 1 month ago • Via Gary Marcus on AI •

CTO Statement. Mira Murati promised we’d someday see 'PhD-level' models, the next big advance over today’s models, but not for another 18 months.

insight • 1 month ago • Via Gary Marcus on AI •

Delayed GPT-5 Arrival. Today is June 20 and I still don’t see squat. It would now appear that Business Insider’s sources were confused, or overstating what they knew.

insight • 1 month ago • Via Gary Marcus on AI •

AGI Prediction Clarification. Ray Kurzweil confirmed he has neither revised nor redefined his prediction of AGI, still defined as AI that can perform any cognitive task an educated human can, and still believes it will happen by 2029.

insight • 1 month ago • Via Gary Marcus on AI •

Opposing Views on AGI. Gary Marcus stands by his own prediction that we will not see AGI by 2029, per criteria he discussed here.

insight • 1 month ago • Via Gary Marcus on AI • garymarcus.substack.com

Debate Potential. Ray Kurzweil and Gary Marcus talked about having a debate, which they hope will come to pass.

recommendation • 1 month ago • Via Gary Marcus on AI •

Interpretation Misunderstanding. Gary Marcus misunderstood Ray Kurzweil to be revising his prediction for AGI to a later year (perhaps 2032).

insight • 1 month ago • Via Gary Marcus on AI •

Reality Check Needed. We need a President who can sort truth from bullshit, in order to develop AI policies that are grounded in reality.

recommendation • 1 month ago • Via Gary Marcus on AI •

Corporate Promises. We need a President who can recognize when corporate leaders are promising things far beyond what is currently realistic.

recommendation • 1 month ago • Via Gary Marcus on AI •

Tech Hype Shift. The big tech companies are hyping AI with long-term promises that are impossible to verify.

insight • 1 month ago • Via Gary Marcus on AI •

Presidential Understanding. We cannot afford to have a President in 2024 who doesn't fully grasp this.

recommendation • 1 month ago • Via Gary Marcus on AI •

Future AI Changes. AI is going to change everything, if not tomorrow, sometime over the next 5-20 years, some ways for good, some for bad.

insight • 1 month ago • Via Gary Marcus on AI •

Current AI Errors. Businesses are finally finding this out, too. (Headline in WSJ: 'AI Work Assistants Need a Lot of Handholding', because they are still riddled with errors.)

data point • 1 month ago • Via Gary Marcus on AI • www.wsj.com

AI Limitations. Generative AI does in fact (still) have enormous limitations, just as I anticipated.

data point • 1 month ago • Via Gary Marcus on AI •

Debate Performance. Former President (and convicted felon) Donald Trump lied like an LLM last night, but still won the debate, because Biden's delivery was so weak.

insight • 1 month ago • Via Gary Marcus on AI •

AI Ignored. Neither president even mentioned AI, which was a travesty of a different sort.

insight • 1 month ago • Via Gary Marcus on AI • www.nytimes.com

Starting Point. Gary Marcus thinks we have maybe one shot to get AI policy right in the US, and that we aren't off to a great start.

insight • 1 month ago • Via Gary Marcus on AI •

Understanding Science. Above all else, we need a President who understands and appreciates science.

recommendation • 1 month ago • Via Gary Marcus on AI •

Urgent AI Policies. We need a President who can get Congress to recognize the true urgency of the moment, since Executive Orders alone are not enough.

recommendation • 1 month ago • Via Gary Marcus on AI •

Call for Metacognition. Scaling is not the most interesting dimension; instead, we need techniques, such as metacognition, that can reflect on what is needed and how to achieve it.

insight • 1 month ago • Via Gary Marcus on AI • en.wikipedia.org

Hope for Change. Gary Marcus hopes that people will take what Gates said seriously.

insight • 1 month ago • Via Gary Marcus on AI •

Neurosymbolic AI's Potential. Neurosymbolic AI has long been an underdog; in the end, I expect it to come from behind and be essential.

insight • 1 month ago • Via Gary Marcus on AI •

Importance of Symbols. I don’t think metacognition can work without bringing explicit symbols back into the mix; they seem essential for high-level reflection.

insight • 1 month ago • Via Gary Marcus on AI •

Funding Concerns. Spending upwards of 100 billion dollars on the current approach seems wasteful if it's unlikely to get to AGI or ever be reliable.

insight • 1 month ago • Via Gary Marcus on AI •

Skepticism on AGI. Many tech leaders have discovered that the best way to raise valuations is to hint that AGI is imminent.

insight • 1 month ago • Via Gary Marcus on AI •

Need for Robust Software. Tech giants need serious commitment to software robustness.

insight • 1 month ago • Via Gary Marcus on AI •

Distress Over Regulation. Gary Marcus is deeply distressed that certain tech leaders and investors are putting massive support behind the presidential candidate least likely to regulate software.

insight • 1 month ago • Via Gary Marcus on AI •

AI Regulation Concerns. An unregulated AI industry is a recipe for disaster.

insight • 1 month ago • Via Gary Marcus on AI •

Shortsighted Innovation. Rushing innovative tech without robust foundations seems shortsighted.

insight • 1 month ago • Via Gary Marcus on AI •

Generative AI Limitations. Leaving more and more code writing to generative AI, which grasps syntax but not meaning, is not the answer.

insight • 1 month ago • Via Gary Marcus on AI • t.co

Black Box AI Issues. Chasing black box AI, difficult to interpret, and difficult to debug, is not the answer.

insight • 1 month ago • Via Gary Marcus on AI •

AI Engineering Techniques. As Ernie Davis and I pointed out in Rebooting AI, five years ago, part of the reason we struggle with complex AI systems is that we still lack adequate techniques for engineering complex systems.

insight • 1 month ago • Via Gary Marcus on AI •

Structural Integrity Lacking. Twenty years ago, Alan Kay said 'Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.'

data point • 1 month ago • Via Gary Marcus on AI •

Software Reliability Needed. The world needs to up its software game massively. We need to invest in improving software reliability and methodology, not rushing out half-baked chatbots.

recommendation • 1 month ago • Via Gary Marcus on AI •

Getting Started with Prompt Testing. Integrating prompt testing into your development workflow is easy.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Integrating Prompt Testing. By running prompt tests regularly, you can catch issues early and ensure that prompts continue to perform well as you make changes and as the underlying LLMs are updated.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Evaluating LLM Outputs. Promptfoo offers various ways to evaluate the quality and consistency of LLM outputs.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Time Savings. Prompt testing saves time in the long run by catching bugs early and preventing regressions.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Introduction to Prompt Testing. Prompt testing is a technique specifically designed for testing LLMs and generative AI systems, allowing developers to write meaningful tests and catch issues early.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
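The idea can be sketched as a tiny harness. This is a minimal illustration, not Promptfoo's actual API; `call_llm` is a hypothetical stand-in you would replace with a real model client:

```python
# Minimal prompt-testing sketch. `call_llm` is a hypothetical stand-in for
# a real model call (OpenAI client, local model, etc.) -- swap in your own.
def call_llm(prompt: str) -> str:
    # Stub that returns a canned answer so the harness is runnable as-is.
    return "Paris is the capital of France."

def run_prompt_tests(cases):
    """Each case is (prompt, list of predicates the output must satisfy)."""
    failures = []
    for prompt, checks in cases:
        output = call_llm(prompt)
        for check in checks:
            if not check(output):
                failures.append((prompt, check))
    return failures

cases = [
    ("What is the capital of France?",
     [lambda out: "Paris" in out,      # factual content present
      lambda out: len(out) < 200]),    # response stays concise
]

print(run_prompt_tests(cases))  # [] when every assertion passes
```

Running this in CI after every prompt or model change is what catches the silent regressions described above.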

Testing Necessity. New LLM models are released, existing models are updated, and the performance of a model can shift over time.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Importance of Testing. LLMs can generate nonsensical, irrelevant, or even biased responses.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Newsletter Growth. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Expert Contributions. In the series Guests, I will invite these experts to come in and share their insights on various topics that they have studied/worked on.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Conclusion on Testing. Prompt testing provides a way to write meaningful tests for these systems, helping catch issues early and save significant time in the development process.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Overemphasis on Models. A common mistake that teams make is to overemphasize the importance of models and underestimate how much the addition of simple features can contribute to performance.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

MLOps Investment. Investing in MLOps enables the development of 10x teams, which are more powerful in the long run.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

ML Engineer Tasks. ML engineers engage in four key tasks: data collection and labeling, feature engineering and model experimentation, model evaluation and deployment, and ML pipeline monitoring and response.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Sustaining Model Performance. Maintaining models post-deployment requires deliberate practices such as frequent retraining on fresh data, having fallback models, and continuous data validation.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Simplicity in Models. Prioritizing simple models and algorithms over complex ones can simplify maintenance and debugging while still achieving desired results.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Product-Centric Metrics. Evaluate models based on metrics aligned with business goals, such as click-through rate or user churn, to ensure they deliver tangible value.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Dynamic Validation. Continuously update validation datasets to reflect real-world data and capture evolving patterns, ensuring accurate performance assessments.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Active Model Evaluation. Keeping models effective requires active and rigorous evaluation processes.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Three Vs of MLOps. Success in MLOps hinges on three crucial factors: Velocity, Validation, and Versioning.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

MLOps Importance. Organizations often underestimate the importance of investing in the right MLOps practices.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Frequent Retraining. Regularly retraining models on fresh, labeled data helps mitigate performance degradation caused by data drift and evolving user behavior.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Collaborative Success. Successful project ideas often stem from collaboration with domain experts, data scientists, and analysts.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Anti-Patterns in MLOps. Several anti-patterns hinder MLOps progress, including the mismatch between industry needs and classroom education.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Documenting Knowledge. To avoid this, prioritize documentation, knowledge sharing, and cross-training.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Tribal Knowledge Risks. Undocumented Tribal Knowledge can create bottlenecks and dependencies, hindering collaboration.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Reducing Alert Fatigue. Focus on Actionable Alerts: Prioritize alerts that indicate real problems requiring immediate attention.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Alert Fatigue Awareness. A common pitfall in data quality monitoring is alert fatigue.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Data Leakage Prevention. Thorough Data Cleaning and Validation: Scrutinize your data for inconsistencies, missing values, and potential leakage points.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Risks with Jupyter Notebooks. Notebooks let you trade quality for simplicity + velocity.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Tools and Experience. Engineers adopt tools that improve their day-to-day experience.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Streamline Deployments. Streamlining deployments and tools that predict end-to-end gains could minimize wasted effort.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Long Tail of ML Bugs. Debugging ML pipelines presents unique challenges due to the unpredictable and often bespoke nature of bugs.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Handling Data Errors. These can be addressed by developing/buying tools for real-time data quality monitoring and automatic tuning of alerting criteria.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Data Error Handling. ML engineers face challenges in handling a spectrum of data errors, such as schema violations, missing values, and data drift.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Development-Production Mismatch. There are discrepancies between development and production environments, including data leakage; differing philosophies on Jupyter Notebook usage; and non-standardized code quality.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

ML Engineering Tasks. An ML Engineer's work spans 4 major tasks.

data point • 1 month ago • Via Artificial Intelligence Made Simple •
Machine Learning Breakdown. In my series Breakdowns, I go through complicated literature on Machine Learning to extract the most valuable insights.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI Made Simple Community. We started an AI Made Simple Subreddit.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.reddit.com

Saudi Arabia's Neom Project. The Saudi government had hoped to have 9 million residents living in 'The Line' by 2030, but this has been scaled back to fewer than 300,000.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Fractal Molecule Discovery. Researchers from Germany, Sweden, and the UK have discovered an enzyme produced by a single-celled organism that can arrange itself into a fractal.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.sciencealert.com

Software Design Principles. During the design and implementation process, I found that the following list of 'rules' kept coming back up over and over in various scenarios.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

C*-Algebraic ML. Looks like more and more people are looking to integrate Complex numbers into Machine Learning.

insight • 1 month ago • Via Artificial Intelligence Made Simple • arxiv.org

Generative AI Insights. Some really good insights on building Gen AI, shared on LinkedIn.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple • www.linkedin.com

LLM Reading Notes. The May edition of my LLM reading note is out.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.linkedin.com

Drug Design Transformation. We hope AlphaFold 3 will help transform our understanding of the biological world and drug discovery.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AlphaFold 3 Predictions. In a paper published in Nature, we introduce AlphaFold 3, a revolutionary model that can predict the structure and interactions of all life’s molecules with unprecedented accuracy.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.nature.com

Spotlight on Aziz. Mohamed Aziz Belaweid writes the excellent, 'Aziz et al. Paper Summaries', where he summarizes recent developments in AI.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple • azizbelaweid.substack.com

AI Education Support. Your generosity is crucial to keeping our cult free and independent, and to helping me provide high-quality AI Education to everyone.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AlphaFold 3 Innovation. Google's AlphaFold 3 is gaining a lot of attention for its potential to revolutionize bio-tech. One of the key innovations that led to its performance gains over previous methods was its utilization of diffusion models.

insight • 1 month ago • Via Artificial Intelligence Made Simple • blog.google

Efficient Time Series Imputation. CSDI, using score-based diffusion models, improves upon existing probabilistic imputation methods by capturing temporal correlations.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Emerging LLM Techniques. Microsoft's GENIE achieves comparable performance with state-of-the-art autoregressive models and generates more diverse text samples.

data point • 1 month ago • Via Artificial Intelligence Made Simple • dl.acm.org

Language Processing Potential. Text Diffusion might be the next frontier of LLMs, at least for specific types of tasks.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Application in Medical Imaging. Diffusion models have shown great promise in reconstructing Medical Images.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Step-by-Step Control. The step-by-step generation process in diffusion models allows users to exert greater control over the final output, enabling greater transparency.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Versatility of DMs. Diffusion models are applicable to a wide range of data modalities, including images, audio, molecules, etc.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

High-Quality Generation. Diffusion models generate data with exceptional quality and realism, surpassing previous generative models in many tasks.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Diffusion Models Explained. Diffusion Models are generative models that follow 2 simple steps: First, we destroy training data by incrementally adding Gaussian noise. Training consists of recovering the data by reversing this noising process.

insight • 1 month ago • Via Artificial Intelligence Made Simple • substackcdn.com
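The noising step above has a convenient closed form: x_t can be sampled directly from x_0 given the cumulative noise schedule. A minimal NumPy sketch (schedule values are illustrative, loosely following the common linear schedule, not any specific paper's settings):

```python
import numpy as np

# Forward (noising) process of a diffusion model: x_t is a weighted mix of
# the clean sample x_0 and Gaussian noise, controlled by a variance schedule.
rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule (illustrative)
alpha_bar = np.cumprod(1.0 - betas)      # cumulative fraction of signal kept

def noise_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps, eps

x0 = rng.standard_normal(8)
x_early, _ = noise_sample(x0, 10)    # mostly signal
x_late, _ = noise_sample(x0, 999)    # almost pure noise

print(alpha_bar[10] > 0.99, alpha_bar[999] < 1e-3)  # True True
```

Training then amounts to teaching a network to predict `eps` from `x_t` and `t`, which is what lets generation run the process in reverse.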

Greenwashing Example. Europe’s largest oil and gas company Shell was accused of selling millions of carbon credits tied to CO2 removal that never took place.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Share Interesting Content. The goal is to share interesting content with y’all so that you can get a peek behind the scenes into my research process.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Venture Capital Overview. A great overview by Rubén Domínguez Ibar of how venture capital firms make decisions.

insight • 1 month ago • Via Artificial Intelligence Made Simple • www.linkedin.com

Meta Llama-3 Release. Our first agent is a finetuned Meta-Llama-3-8B-Instruct model, which was recently released by Meta GenAI team.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Deep Learning Method Spotlight. The DSDL framework significantly outperforms other dynamical and deep learning methods.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Fungal Computing Potential. Unlock the secrets of fungal computing! Discover the mind-boggling potential of fungi as living computers.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Gaming and Chatbots. Limited Risk AI Systems like chatbots or content generation require transparency to inform users they are interacting with AI.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

High-Risk AI Systems. High-Risk AI Systems are involved in critical sectors like healthcare, education, and employment, where there's a significant impact on people's safety or fundamental rights.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

AI Regulation Insight. The regulation is primarily based on how risky your use case is rather than what technology you use.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Upcoming Articles Preview. Curious about what articles I’m working on? Here are the previews for the next planned articles.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Community Spotlight Resource. Kiki's Bytes is a super fun YouTube channel that covers various System Design case studies.

insight • 1 month ago • Via Artificial Intelligence Made Simple • www.youtube.com

Pay What You Can. We follow a 'pay what you can' model, which allows you to support within your means.

data point • 1 month ago • Via Artificial Intelligence Made Simple • artificialintelligencemadesimple.substack.com

Credit Scoring Adaptation. Factors that predicted high creditworthiness a few years ago might not hold true today due to changing economic conditions or consumer behavior.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Neural Networks Versatility. Thanks to their versatility, Neural Networks are a staple in most modern Machine Learning pipelines.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Evolving Language Models. Language Models trained on social media data need to adapt to constantly evolving language use, slang, and emerging topics.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Simplifying Data Augmentation. Before you decide to get too clever, consider the takeaway from TrivialAugment: the simplest method was so far overlooked, even though it performs comparably or better.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Gradient Reversal Layer. The gradient reversal layer acts as an identity function during the forward pass but reverses gradients during backpropagation, creating a minimax game between the feature extractor and the domain classifier.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
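The mechanics are simple enough to sketch without an autograd framework; the class below is an illustrative toy, not any library's implementation:

```python
import numpy as np

# Gradient reversal layer (GRL) from domain-adversarial training:
# an identity function in the forward pass, but gradients are multiplied
# by -lambda in the backward pass, so the feature extractor is pushed to
# *confuse* the domain classifier rather than help it.
class GradientReversal:
    def __init__(self, lam: float = 1.0):
        self.lam = lam

    def forward(self, x):
        return x                        # identity: features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # flip (and scale) the incoming gradient

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
print(grl.forward(x))   # unchanged
print(grl.backward(x))  # [-0.5  1.  -1.5]
```

In a real framework this would be a custom autograd op inserted between the feature extractor and the domain classifier.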

Impact on Sentiment Analysis. Our experiments on a sentiment analysis classification benchmark... show that our neural network for domain adaptation algorithm has better performance than either a standard neural network or an SVM.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Adversarial Training Process. Domain-Adversarial Training (DAT) involves training a neural network with two competing objectives: to accurately perform the main task and to confuse a domain classifier that tries to distinguish between source and target domain data.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

The Role of DANN. DANNs theoretically attain domain invariance by learning domain-invariant features.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Mitigating Distribution Shift. Good data + adversarial augmentation + constant monitoring works wonders.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Sources of Distribution Shift. Possible sources of distribution shift include sample selection bias, non-stationary environments, domain adaptation challenges, data collection and labeling issues, adversarial attacks, and concept drift.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Understanding Distribution Shift. Distribution shift, also known as dataset shift or covariate shift, is a phenomenon in machine learning where the statistical distribution of the input data changes between the training and deployment environments.

data point • 1 month ago • Via Artificial Intelligence Made Simple •
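A crude way to catch such a shift is to compare summary statistics of live data against the training distribution. The sketch below only illustrates the idea; real monitoring would use proper two-sample tests (KS test, PSI, etc.), and the threshold here is an arbitrary choice:

```python
import numpy as np

# Toy distribution-shift detector: flag drift when the live-window mean is
# far from the training mean, measured in standard errors.
def drifted(train, live, z_threshold=3.0):
    se = train.std(ddof=1) / np.sqrt(len(live))
    return abs(live.mean() - train.mean()) / se > z_threshold

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, 10_000)
same = rng.normal(0.0, 1.0, 500)      # drawn from the training distribution
shifted = rng.normal(0.8, 1.0, 500)   # mean has drifted in production

print(drifted(train, same), drifted(train, shifted))
```

Running a check like this on every batch of production data is the cheapest first line of defense before reaching for heavier monitoring tooling.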

Improving Generalization. There are several ways to improve generalization such as implementing sparsity and/or regularization to reduce overfitting and applying data augmentation to mithridatize your models.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Challenges in Neural Networks. There are several underlying issues with the training process that scale does not fix, chief amongst them being distribution shift and generalization.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Social Media Awareness. Epicurean philosophy is a good reminder to keep vigilant about how we’re being influenced by the constant subliminal messaging and to only pursue the pleasures that we want for ourselves.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Reading Recommendation. The plan is to do one of these a month as a special reading recommendation.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Happiness Through Simplicity. True happiness doesn’t come from endlessly chasing pleasure, but from systematically eliminating the sources of our unhappiness.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Self-Reflection Necessity. A good community directly benefits self-reflection.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Community and Introspection. Epicurus encouraged his followers to form close-knit communities that allow their members to step back and help each other critically analyze the events around them.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Friendship Statistics. People with no friends or poor-quality friendships are twice as likely to die prematurely, according to Holt-Lunstad's meta-analysis of more than 308,000 people.

data point • 1 month ago • Via Artificial Intelligence Made Simple • doi.org

Friendship Importance. Epicurus has a particularly strong emphasis on the importance of friendship as a must for a happy life.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Epicurean Philosophy. Epicurean philosophy is based on a simple supposition: we are happy when we remove the things that make us unhappy.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Next-Gen Embeddings. Today we will primarily look at 4 publications to see how we can improve embeddings by exploring a dimension that has been left untouched: their angles.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Greater Performance Gains. AnglE consistently outperforms SBERT, achieving an absolute gain of 5.52%.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AnglE Optimization. AnglE optimizes not only the cosine similarity between texts but also the angle to mitigate the negative impact of the saturation zones of the cosine function on the learning process.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Contrastive Learning Impact. Contrastive Learning encourages similar examples to have similar embeddings and dissimilar examples to have distinct embeddings.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Modeling Relations. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space uses complex numbers for knowledge graph embedding.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Complex Geometry Advantage. The complex plane provides a richer space to capture nuanced relationships and handle outliers.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Orthogonality Benefits. Orthogonality helps the model to capture more nuanced relationships and avoid unintended correlations between features.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Angular Representation. Focusing on angles rather than magnitudes avoids the saturation zones of the cosine function, enabling more effective learning and finer semantic distinctions.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Saturation Zones. The saturation zones of the cosine function can kill the gradient and make the network difficult to learn.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
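The saturation effect is easy to see numerically: the gradient of cosine similarity with respect to the angle is -sin(theta), which vanishes when vectors are nearly parallel or anti-parallel. A minimal sketch:

```python
import numpy as np

# d/dtheta cos(theta) = -sin(theta): near theta = 0 or pi (the "saturation
# zones") the gradient is tiny, so the optimizer gets almost no signal,
# while at theta = pi/2 the signal is strongest.
theta = np.array([0.01, np.pi / 2, np.pi - 0.01])
grad = -np.sin(theta)

print(np.round(np.abs(grad), 4))  # [0.01 1.   0.01]
```

This is the motivation for optimizing in a space (such as the angle itself, or the complex plane) where the learning signal does not collapse at the extremes.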

Challenges in Embeddings. Current Embeddings are held back by three things: Sensitivity to Outliers, Limited Relation Modeling, and Inconsistency.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Enhancing NLP. Good Embeddings allow three important improvements: Efficiency, Generalization, and Improved Performance.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

LLMs Hitting Wall. This is what leads to the impression that "LLMs are hitting a wall".

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Critical Flaws. Such developments have 3 inter-related critical flaws: They mostly work by increasing the computational costs of training and/or inference, they are a lot more fragile than people realize, and they are incredibly boring.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Research Areas. A lot of current research focuses on LLM architectures, data sources, prompting, and alignment strategies.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Client Payment Process. I am monetizing this newsletter through my employer, SVAM International (US work laws bar me from taking money from anyone who is not my employer).

insight • 1 month ago • Via Artificial Intelligence Made Simple • artificialintelligencemadesimple.substack.com

Change in Payout Schedule. I’ve switched the payout schedule to monthly to ensure that I always have a buffer in my Stripe Account to handle issues like this.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Mental Space for Writing. Writing/Research takes a lot of mental space, and I don’t think I could do a good job if I was constantly firefighting these issues.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Communication Efforts. I have started communicating with the reader, my company, and Stripe/the bank.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Long Review Process. I have been told the review by the bank could take up to 3 months.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Stripe's Negative Balance Policy. Stripe does not let you use future deposits to settle balances, which makes sense from their perspective but leaves me in this weird situation.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Stripe Payouts Paused. Due to all of this, Stripe has paused all my payouts.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Financial Loss. I lose money on every fraud claim. In this case, Stripe has removed 70 USD from my Stripe account: 50 for the base plan + 20 in fees.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Fraudulent Claim Issue. Unfortunately, one of the readers missed this. They signed up for a 50 USD/year plan and marked that transaction as fraudulent, causing complications.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Indefinite Pause. AI Made Simple will be going on an indefinite pause now.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

KAN Overview. This article will explore KANs and their viability in the new generation of Deep Learning.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Kolmogorov-Arnold Representation. The KART states that any continuous function with multiple inputs can be created by combining simple functions of a single input (like sine or square) and adding them together.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
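
A classic toy example of this flavor: the two-input function f(x, y) = x·y can be rebuilt from nothing but additions and a single-input square function. This is only an illustration of the theorem's spirit, not the general construction:

```python
def square(u):
    """A simple one-input 'inner' function."""
    return u * u

def product(x, y):
    """Rebuild the two-input function f(x, y) = x * y using only
    additions and single-input functions, in the spirit of the
    Kolmogorov-Arnold representation theorem."""
    return (square(x + y) - square(x - y)) / 4

print(product(3.0, 7.0))   # 21.0
```

KANs exploit exactly this structure: the network learns the one-input functions (as splines) and wires them together with sums.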

Educational Importance. Even if we find fundamental limitations that make KANs useless, studying them in detail will provide valuable insights.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Grid Extension Technique. The grid extension technique allows KANs to adapt to changes in data distribution by increasing the grid density during training.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Spline Usage. KANs use B-splines to approximate activation functions, providing accuracy, local control, and interpretability.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
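
The local-control property is what makes splines attractive here: each basis function is nonzero only near its knot, so nudging one coefficient reshapes the activation locally without disturbing the rest of the curve. A minimal sketch using degree-1 ("hat") basis functions rather than the higher-order B-splines in the paper:

```python
import numpy as np

def hat_basis(x, knots):
    """Degree-1 B-spline ('hat') basis functions on a uniform grid.
    Each basis function is nonzero only between its two neighbouring
    knots, which is what gives splines their local control."""
    h = knots[1] - knots[0]
    return np.clip(1 - np.abs(x[:, None] - knots[None, :]) / h, 0, 1)

knots = np.linspace(-2, 2, 9)
coefs = np.sin(knots)                      # stand-in 'learned' coefficients
x = np.linspace(-2, 2, 5)
activation = hat_basis(x, knots) @ coefs   # spline-parameterised activation

# Local control: perturbing one coefficient only moves the output
# near that knot, leaving the rest of the curve untouched.
coefs2 = coefs.copy()
coefs2[0] += 1.0
activation2 = hat_basis(x, knots) @ coefs2
print(np.abs(activation2 - activation))    # nonzero only at the first knot
```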

Interactive KANs. Users can collaborate with KANs through visualization tools and symbolic manipulation functionalities.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Explainability Benefits. KANs are more explainable, which is a big plus for sectors where model transparency is critical.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Accuracy of KANs. KANs can achieve lower RMSE loss with fewer parameters compared to MLPs for various tasks.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Performance and Training. KAN training is 10x slower than that of comparable NNs, which may limit their adoption in more mainstream directions dominated by scale.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Sparse Compositional Structures. A function has a sparse compositional structure when it can be built from a small number of simple functions, each of which only depends on a few input variables.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

KAN Advantages. KANs use learnable activation functions on edges, which makes them more accurate and interpretable, especially useful for functions with sparse compositional structures.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Need for Public Dialogue. Encouraging open dialogue and debate fosters critical thinking, raising awareness about oppression and empowering individuals to resist manipulation.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Challenge Comfort with Beliefs. Having good-faith conversations and the willingness to challenge deeply held beliefs is essential to fight dogma and ensure a society of free individuals.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

AI Structural Concerns. The push for AI alignment by corporations may suppress inconvenient narratives, illustrating a paternalistic approach to technology.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Technology and Risk. The lack of risk judgment and decision-making training is prevalent across roles and professions that most need it, revealing gaps in corporate risk management.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Current Gen Z Struggles. 67% of people 18 to 34 feel 'consumed' by their worries about money and stress, making it hard to focus, as part of the Gen Z mental health crisis.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.wsav.com

Societal Symptoms. Being 'busy with work' has become a default way for people to spend their time, symptomatic of what Arendt called the 'victory of the animal laborans.'

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Banality of Evil. Arendt argued that Adolf Eichmann's participation in the Holocaust was driven by thoughtlessness and blind obedience to authority, reflecting the concept of 'Banality of Evil.'

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Totalitarianism Origins. Arendt argued that totalitarianism was a new form of government arising from the breakdown of traditional society and an increasingly ungrounded populace.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

The Active Life Components. Hannah Arendt broke life down into 3 kinds of activities: Labor, Work, and Action, emphasizing that modern society deprioritizes the latter two.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Hannah Arendt Insights. Hannah Arendt was a 20th-century political theorist, well known for her thoughts on the nature of evil, the rise of totalitarianism, and her strong emphasis on the importance of living the 'active life.'

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Red-teaming Purpose. Red-teaming/Jailbreaking is a process in which AI people try to make LLMs talk dirty to them.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

ACG Effectiveness. In the time that it takes ACG to produce successful adversarial attacks for 64% of the AdvBench set, GCG is unable to produce even one successful attack.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

ACG Methodology. The Accelerated Coordinate Gradient (ACG) attack method combines algorithmic insights and engineering optimizations on top of GCG to yield a ~38x speedup.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Haize Labs Automation. Haize Labs seeks to rigorously test an LLM or agent with the purpose of preemptively discovering all of its failure modes.

insight • 1 month ago • Via Artificial Intelligence Made Simple • haizelabs.com

Shift in Gender Output. The base model generates approximately 80% male and 20% female customers while the aligned model generates nearly 100% female customers.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Bias Distribution Changes. The alignment process would likely create new, unexpected biases that were significantly different from your baseline model.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Lower Output Diversity. Aligned models exhibit lower entropy in token predictions, form distinct clusters in the embedding space, and gravitate towards 'attractor states', indicating limited output diversity.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

LLM Understanding. People often underestimate how little we understand about LLMs and the alignment process.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Adversarial Attack Generalization. The attack didn’t apply to any other model (including the base GPT).

insight • 1 month ago • Via Artificial Intelligence Made Simple •

High Cost of Red-teaming. Good red-teaming can be very expensive since it requires a combination of domain expert knowledge and AI person knowledge for crafting and testing prompts.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Low Safety Checks. Many of them are too dumb: the bar that the prompts and checks set for what counts as a 'safe' model is too low to be meaningful.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Subscriber Growth. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

TechBio Resources. We have a strong bio-tech focus this week because of all my reading into that space.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Legal AI Evaluation. We argue that this claim is not supported by the current evidence, diving into AI’s roles in various legal tasks.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Python Precision Issues. Python compares the integer value against the double precision representation of the float, which may involve a loss of precision, causing these discrepancies.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
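
The precision loss is easy to reproduce. A 64-bit float carries 53 bits of mantissa, so integers above 2**53 can't all be represented exactly, and any code path that converts large ints to floats inherits the rounding. A small illustration (note that CPython's own `int == float` comparison is exact, which is how it catches the mismatch):

```python
# A 64-bit float has 53 bits of mantissa, so integers above 2**53
# cannot all be represented exactly: converting them to float may
# silently round to a neighbouring value.
big = 2**53 + 1
print(float(big) == float(2**53))   # True: both round to the same double
print(int(float(big)) == big)       # False: the round-trip lost precision

# CPython compares int == float exactly, so the mismatch is visible:
print(big == float(big))            # False
```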

Model Performance Challenge. We demonstrate here a dramatic breakdown of function and reasoning capabilities of state-of-the-art models trained at the largest available scales.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Deep Learning Insight. This paper presents a framework, HypOp, that advances the state of the art for solving combinatorial optimization problems in several aspects.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI-Relations Trend. The ratio of people who reach out to me for AIRel vs ML roles has gone up significantly over the last 2–3 months.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Community Engagement. If you/your team have solved a problem that you’d like to share with the rest of the world, shoot me a message and let’s go over the details.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Reading Inspired. I figured I’d start sharing whatever AI Papers/Publications, interesting books, videos, etc I came across each week.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Content Focus. While the focus will be on AI and Tech, the ideas might range from business, philosophy, ethics, and much more.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

GPU Efficiency. We also provide a GPU-efficient implementation of this model which reduces memory usage by up to 61% over an unoptimized baseline during training.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Training Efficiency Improvements. To counteract smaller gradients due to ternary weights, larger learning rates than those typically used for full-precision models should be employed.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Learning Rate Strategy. For the MatMul-free LM, the learning dynamics necessitate a different learning strategy, maintaining the cosine learning rate scheduler and then reducing the learning rate by half.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Matrix Multiplication Bottleneck. Matrix multiplications (MatMul) are a significant computational bottleneck in Deep Learning, and removing them enables the creation of cheaper, less energy-intensive LLMs.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Memory Transfer Optimization. The Fused BitLinear Layer eliminates the need for multiple data transfers between memory levels, significantly reducing overhead.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Fused BitLinear Layer. The Fused BitLinear Layer combines operations and reduces memory accesses, significantly boosting training efficiency and lowering memory consumption.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Linear Layer Efficiency. Replacing non-linear operations with linear ones can boost your parallelism and simplify your overall operations.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Simplified Operations. The secret to their great performance rests on a few innovations that follow two major themes- simplifying expensive computations and replacing non-linearities with linear operations.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Cost Reduction Strategies. The core idea includes restricting weights to the values {-1, 0, +1} to replace multiplications with simple additions or subtractions.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
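
With weights restricted to {-1, 0, +1}, every "multiplication" in a matrix product reduces to keeping, dropping, or negating an input. A minimal sketch of the idea (a naive reference implementation, not the paper's optimized kernel):

```python
import numpy as np

def ternary_matvec(W, x):
    """Matrix-vector product with weights restricted to {-1, 0, +1}:
    each weight keeps, drops, or negates an input, so the whole
    product is computed with additions and subtractions only."""
    out = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        out[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
    return out

W = np.array([[1, 0, -1],
              [0, 1,  1]])
x = np.array([2.0, 3.0, 5.0])
print(ternary_matvec(W, x))   # [-3.  8.]  == W @ x, with no multiplies
```

In hardware, dropping the multiplier units is what drives the memory and energy savings the paper reports.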

Performance Comparison. MatMul-Free LLMs (MMF-LLMs) achieve performance on-par with state-of-the-art Transformers that require far more memory during inference at a scale up to at least 2.7B parameters.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Generational Perspective. I am a Gen Z kid who grew up with technology.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Emotional Intelligence. Develop VCSAs that incorporate emotional intelligence to enhance user engagement and satisfaction.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Control Mechanisms. Ensure that VCSAs include features that give users a sense of control and the ability to communicate successfully with their devices.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Design for Imperfection. Design VCSAs to exhibit some level of imperfection to create relaxed interactions.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Managerial Implications. Encourage Partner-like interactions: use speech acts and algorithms to promote the perception of VCSAs as partners.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Partner Relationship. The perception of the relationship with the VCSA as a real partner attributes a distinct personality to the VCSA, making it an appealing entity.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Master Relationship. Some perceived the VCSA as a master, feeling like servants bound by its rules and unpredictable nature.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Servant Relationship. Young consumers frequently envisioned their VCSA as a servant that helps them accomplish their tasks.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Types of Relationships. From the results of the study three different relationships emerge: servant-master dynamic, dominant entity, and equal partners.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Controls and Preferences. Consumers may relate to anthropomorphized products either as others or as extensions of their self.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Self-extension Theory. If you think about the influence that particularly valuable products have on you, you increasingly consider them extensions of yourself.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Uncanny Valley. The Uncanny Valley represents clearly how different degrees of anthropomorphism can change our feelings and attitudes toward technologies and AI assistants.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Anthropomorphism Effects. Evidence shows that anthropomorphized products can enhance consumer preference, make products appear more vivid, and increase their perceived value.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Anthropomorphism Concept. Today's scholars focus on the broad concept of anthropomorphism: essentially, it is humans' tendency to perceive humanlike agents in nonhuman entities and events.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

VCSAs Definition. Alexa, Google Home, and similar devices fall into the category of so-called 'voice-controlled smart assistants' (VCSAs).

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Marriage Proposals. A good portion of those even said they would marry her.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.vocativ.com

Alexa Love. Amazon reported that half a million people told Alexa they loved her.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.geekwire.com

Human-like Interactions. When we interact with devices like Alexa or Google Home, we have different ways of thinking about ourselves and we relate to them differently from other people.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Skepticism on Technology. While I can’t imagine my life without tech, most of the activities I enjoy are physical ones that would be very hard to simulate adequately.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI-Human Relationship. The AI-human relationship dynamic is not something that I know much about.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Weekly Reach. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

AI Expertise Invitation. In the series Guests, I will invite these experts to come in and share their insights on various topics that they have studied/worked on.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Choco Milk Cult. Our chocolate milk cult has a lot of experts and prominent figures doing cool things.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Generative AI Commercialization Struggles. Close to 2 years since the release of ChatGPT, organizations have struggled to capitalize on the promise of Generative AI.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Data Contextuality in Healthcare Algorithms. A bombshell study found that a clinical algorithm many hospitals were using to decide which patients need care was showing racial bias.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.aclu.org

AGI and Reduction of Information. The implication of this on generalized intelligence is clear. Reducing the amount of information to focus on what is important to a clearly defined problem is antithetical to generalization.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Contextual Nature of Data. Good or bad data is defined heavily by the context.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Statistical Proxy Limitations. Within any dataset is an implicit value judgment of what we consider worth measuring.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Good Data Removes Noise. Good Data Doesn’t Add Signal; it Removes Noise.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Skepticism About Generalized Intelligence. Ultimately, my skepticism around the viability of 'generalized intelligence' emerging by aggregating comes from my belief that there is a lot about the world and its processes that we can’t model within data.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Issues with Self-Driving Cars. Self-driving cars do find merges challenging.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI Flattens Data Analysis. AI Flattens: By its very nature, AI works by abstracting the commonalities.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Data-Driven vs Mathematical Insights. My thesis can be broken into two parts. Firstly, I argue that Data-Driven Insights are a subclass of mathematical insights.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Yann LeCun's AGI Claim. Yann LeCun has made headlines with his claims that 'LLMs are an off-ramp to AGI.'

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI's PR Campaign. This has led to a massive PR campaign to rehab AI's image and prepare for the next round of fundraising.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI's Financial Cost for Microsoft. This is costing Microsoft more than $650 million.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Inflection AI's Revenue Failure. Inflection AI’s revenue was, in the words of one investor, “de minimis.” Essentially zilch.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.nytimes.com

Impacts of FoodTech. The impact of food-related sciences is immense, proving that food is not just a basic necessity but a pivotal element in saving lives.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI Market Hype. AI has many useful use cases, but it’s important not to let yourself get manipulated by people trying to piggyback off successful projects to sell their hype.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Knowledge Distillation. Knowledge distillation is a model training method that trains a smaller model to mimic the outputs of a larger model.

insight • 1 month ago • Via Artificial Intelligence Made Simple •
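
The standard distillation objective (going back to Hinton et al.) minimizes the KL divergence between temperature-softened teacher and student distributions. A minimal sketch with toy logits; the temperature T > 1 exposes the teacher's "dark knowledge" about which wrong classes it considers almost right:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions: the classic knowledge-distillation objective."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))))

teacher = np.array([4.0, 1.0, 0.5])
print(distillation_loss(np.array([3.8, 1.1, 0.4]), teacher))  # small: close mimic
print(distillation_loss(np.array([0.0, 0.0, 4.0]), teacher))  # large: poor mimic
```

In practice this term is usually combined with the ordinary cross-entropy loss on the hard labels, weighted by a hyperparameter.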

Security Challenges. Demand for high-performance chips designed specifically for AI applications is spiking.

insight • 1 month ago • Via Artificial Intelligence Made Simple • safeesteem.substack.com

AI Tokenization Method. The tokenizer for Claude 3 and beyond handles numbers quite differently to its competitors.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Reading Interest. If you want to keep your finger on your pulse for the tech-bio space, she’s an elite resource.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple • marinatalamanou.substack.com

Technical Insight Source. Hai doesn’t shy away from talking about the Math/Technical Details, which is a rarity on LinkedIn.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Spotlight on Expertise. Hai Huang is a Senior Staff Engineer at Google, working on their AI for productivity projects.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.linkedin.com

Community Engagement. We started an AI Made Simple Subreddit.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.reddit.com

Reading Recommendations. I figured I’d start sharing whatever AI Papers/Publications, interesting books, videos, etc. I came across each week.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Subscriber Goal. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Curated Insights. In issues of Updates, I will share interesting content I came across.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Key ACI Properties. ACIs should prioritize actions that are straightforward and easy to understand to minimize the need for extensive demonstrations or fine-tuning.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Guest Contributions. In the series Guests, I will invite experts to share their insights on various topics that they have studied/worked on.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Improving Error Recovery. Implementing guardrails, such as a code syntax checker that automatically detects mistakes, can help prevent error propagation and assist agents in identifying and correcting issues promptly.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

SWE-Bench Overview. SWE-bench is a comprehensive evaluation framework comprising 2,294 software engineering problems sourced from real GitHub issues and their corresponding pull requests across 12 popular Python repositories.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.swebench.com

SWE-Agent Performance. When using GPT-4 Turbo as the base LLM, SWE-agent successfully solves 12.5% of the 2,294 SWE-bench test issues, significantly outperforming the previous best resolve rate of 3.8%.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Effective ACI Design. By designing effective ACIs, we can harness the power of language models to create intelligent agents that can interact with digital environments in a more intuitive and efficient manner.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Agility in Code Editing. The experiments reveal that agents are sensitive to the amount of content displayed in the file viewer, and striking the right balance is essential for performance.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

SWE-Agent Functionalities. SWE-Agent offers commands that enable models to create and edit files, streamlining the editing process into a single command that facilitates easy multi-line edits with consistent feedback.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Optimizing Agent Interfaces. Human user interfaces may not always be the most suitable for agent-computer interactions, calling for improved localization through faster navigation and more informative search interfaces tailored to the needs of language models.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Deepfake Market Growth. Deepfake-related losses are expected to soar from $12.3 billion in 2023 to $40 billion by 2027, growing at an astounding 32% compound annual growth rate.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www2.deloitte.com

Adversarial AI Rise. Deepfakes typify the cutting edge of adversarial AI attacks, achieving a 3,000% increase last year alone; incidents are projected to rise by 50% to 60% in 2024.

data point • 1 month ago • Via Artificial Intelligence Made Simple • www.vpnranks.com

Enterprise Security Concerns. 60% of CISOs, CIOs, and IT leaders are afraid their enterprises are not prepared to defend against AI-powered threats and attacks.

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Detection Strategy Development. Our goal is to classify an input image into one of three categories: real, deepfake, and AI-generated, which helps organizations catch deepfakes amid enterprise fraud.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Affordable Detection Solutions. Many cutting-edge Deepfake Detection setups are too costly to run at scale, severely limiting their utility in high-scale environments like Social Media.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Model Performance. Our best models achieved strong results: 0.93 (SVC), 0.82 (RandomForest), and 0.80 (XGBoost).

data point • 1 month ago • Via Artificial Intelligence Made Simple •

Deepfake Detection Collaboration. If your organization deals with Deepfakes, reach out to customize the baseline solution to meet your specific needs.

recommendation • 1 month ago • Via Artificial Intelligence Made Simple •

Social Media Influence. AI models are starting to gain a lot of popularity online, with some influencers earning significant incomes.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

Early Project Insights. We were good at the main task but had terrible generalization and robustness.

insight • 1 month ago • Via Artificial Intelligence Made Simple •

AI Functionality Potential. We believe this process creates artifacts or fingerprints that ML models can detect.

insight • 1 month ago • Via Artificial Intelligence Made Simple •