Newsletter
Highlights:
OpenAI will release a new AI model called “Strawberry” in the fall with advanced reasoning capabilities.
California’s artificial intelligence regulation bill has passed the State Assembly.
An AI model called GameNGen successfully simulated the classic game Doom using real-time image generation.
ChatGPT
How Many R's in 'Strawberry'? This AI Doesn't Know (Inc)
A ChatGPT conversation about strawberries has blown up on social platforms like Reddit. The user asks ChatGPT how many R’s are in the word “strawberry,” and ChatGPT replies with two. The user persists by spelling out the word letter by letter, to which the bot replies, “I see what you did there! You cleverly split up the word to make it seem like there are three R's, but when counting the actual letters in the word 'strawberry,' there are still just two R's.” The user then keeps asking the bot to spell it out for itself until it concedes, “So, you're right. When counting in this way, there are three R's.” On the surface, it’s a funny meme to pass around. On a deeper level, it’s a reminder that AI-generated information still needs human checks.
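For anyone keeping score at home, a few lines of Python settle the question the chatbot stumbled on (the word itself is the only input; everything else is standard library):

```python
# Count how many times the letter R appears in "strawberry".
word = "strawberry"
r_count = word.lower().count("r")
print(f"'{word}' contains {r_count} R's")  # prints: 'strawberry' contains 3 R's
```

Three, as the user's letter-by-letter spelling showed. The irony is that this is trivial for a one-line script but tricky for a language model, which processes text as tokens rather than individual letters.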
OpenAI Aims to Release New AI Model, ‘Strawberry,’ in Fall (PYMNTS)
OpenAI is reportedly aiming to release its next-level artificial intelligence, a product called “Strawberry,” in the fall. Strawberry will reportedly be able to solve problems and tasks beyond the capabilities of current AI models: for example, solving math problems it has never encountered, developing market strategies, and working through complex word puzzles.
I could not find any explanation for the name “Strawberry,” but I found it coincidental (and humorous) given the recent stir about ChatGPT’s spelling struggles.
Legal
California AI bill passes State Assembly, pushing AI fight to Newsom (WashingtonPost)
The California State Assembly passed a bill on Wednesday that would enact the nation’s strictest artificial intelligence regulations yet. The bill now returns to the state Senate and is expected to pass quickly on to Gov. Gavin Newsom’s desk. The proposed law would require companies to test their AI technologies for risk before selling them, and if harm occurred, they could be sued by the California attorney general. The bill applies only to large and expensive AI models, and its supporters say it will not affect smaller start-ups seeking to compete with Big Tech.
Police investigate AI-generated nude photos using Lancaster County students' faces (WGAL)
A Lancaster County (PA) police department is investigating artificially generated nude photos of more than 20 female high school students. One of the girls came forward anonymously to describe how she learned from another girl that a boy had created the images and shared them on Discord with several other boys from the school. Despite the girls’ worries that this could affect their future college and job prospects, there are currently no laws under which charges can be pressed. The school has taken no action so far either. Moving forward, it promises to educate students on the dangers of social media and artificial intelligence, and to amend the student handbook to make the misuse of AI a punishable offense.
A similar incident occurred in a different Pennsylvania school in July, after middle school students created fake TikTok accounts impersonating their teachers. What is going on in PA schools?
Science
Team using AI finds a cheaper way to make green hydrogen (Phys)
Researchers at the University of Toronto are using artificial intelligence to accelerate scientific breakthroughs in the search for sustainable energy. Using an AI-generated “recipe” for a new catalyst, they have found a more efficient way to make hydrogen fuel. To create green hydrogen, you need the right alloy, or combination of metals, which are often found through trial and error. The AI program analyzed 36,000 different metal oxide combinations and recommended alloys that were effective and stable. "What would take a person years to test, the computer can simulate in a matter of days,” says Jehad Abed, one of the Ph.D. students involved.
Technology
Google Gemini will let you create AI-generated people again (The Verge)
Google is letting Gemini users generate images of people again after pulling the feature earlier this year amid reports of historically inaccurate images. The upgraded Imagen 3 model comes with built-in safeguards against previous incidents, like pictures of racially diverse Nazis. Gemini also doesn’t allow photorealistic images of public figures, content involving minors, or graphic, violent, or sexual scenes. Google has said “not every image Gemini creates will be perfect,” but it will continue to listen to feedback and improve.
New Research Finds Stark Global Divide in Ownership of Powerful AI Chips (Time)
A new peer-reviewed paper examines the presence of AI across the globe and found that GPUs (the computer chips that power AI) are highly concentrated in only 30 countries. The U.S. and China lead by a sizable margin, but much of the world lies in what the authors call “Compute Deserts”: areas where there are no GPUs for hire at all. “This has implications for which countries shape AI development as well as norms around what is good, safe, and beneficial AI,” says Boxi Wu, one of the paper’s authors.
Entertainment
This AI Model Can Simulate the PC Game Doom in Real-Time (PCMag)
Researchers at Google recently used AI to simulate the 1993 classic PC shooter Doom, without using any code from the game itself. Instead, they had the AI generate still frames, the way an AI image generator does, at a rate of 20 frames per second, fast enough for a playable experience. The model, called GameNGen, was created to demonstrate that a complex video game can run on a neural network. The main caveat is that the model has access to only about three seconds of history, so objects or enemies can appear and disappear at random.