Newsletter


Highlights:

  • Microsoft and LinkedIn release the 2024 Work Trend Index, a joint report on the state of AI at work.

  • Microsoft launched a GPT-4 LLM in an isolated environment on a government-only network, though it is not yet accredited for top-secret use.

  • Google DeepMind unveiled AlphaFold 3, an AI model that predicts the behavior of life’s molecules and could help develop life-changing treatments for diseases.


Business

Microsoft announces $3.3 billion investment in Wisconsin to spur artificial intelligence innovation and economic growth (Microsoft)

Microsoft is investing in Southeast Wisconsin as a “hub for AI-powered economic activity, innovation, and job creation.” The $3.3 billion investment between now and the end of 2026 includes cloud computing and AI infrastructure, a manufacturing-focused AI co-innovation lab, and an AI skilling initiative for 100,000 residents. Microsoft has also partnered with National Grid to build a 250 megawatt solar project in Wisconsin to offset the environmental burden of the AI developments.

Microsoft and LinkedIn release the 2024 Work Trend Index on the state of AI at work (Microsoft)

Microsoft and LinkedIn compiled a joint report on the state of AI at work using their own research, drawing on a survey of 31,000 people across 31 countries, LinkedIn labor and hiring trends, Microsoft 365 productivity data, and research with Fortune 500 customers. The conclusion is that AI is influencing the way people work, hire, and lead. Use of generative AI at work has nearly doubled in the past six months, and employees have started bringing their own AI tools to work, whether their companies encourage it or not.

Government

Microsoft’s ‘air gapped’ AI is a bot set up to process top-secret info (The Verge)

Microsoft announced on Tuesday that it has deployed “a GPT-4 large language model in an isolated, air-gapped environment on a government-only network.” The model is static, meaning it operates without learning from the files fed to it or from the internet in general. So far it can answer questions and write code, but it has not yet been accredited for top-secret use.

An AI tool used in thousands of criminal cases is facing legal challenges (NBC News)

In recent years, law enforcement agencies have turned to an AI tool called Cybercheck to help investigate, charge, and convict suspects accused of serious crimes. But defense lawyers have questioned its accuracy and reliability, since its methodology is “opaque” and it has not been independently vetted. The tool’s creator, Adam Mosher, responded that Cybercheck’s accuracy tops 90%, although he has not provided details on how that figure is calculated.

Japan’s Kishida unveils a framework for global regulation of generative AI (AP News)

On Thursday, Japanese Prime Minister Fumio Kishida unveiled an international framework for regulating and using generative AI. In his speech, Kishida stated, “Generative AI has the potential to be a vital tool to further enrich the world… but we must also confront the dark side of AI, such as the risk of disinformation.” Forty-nine countries and regions have signed up for the framework, called the Hiroshima AI Process Friends Group, although exactly which ones has not been revealed.

Science

Google DeepMind unveils next generation of drug discovery AI model (Reuters)

AlphaFold is an artificial intelligence model intended to “help scientists design drugs and target disease more effectively.” Google DeepMind unveiled the third major version of AlphaFold, which can map the behavior of all of life’s molecules. This technology could help develop potentially life-changing treatments for diseases. DeepMind also released a free online tool called AlphaFold Server that will allow scientists to test their hypotheses digitally before running real-world experiments.

Entertainment

TikTok to label AI-generated content from OpenAI and elsewhere (Reuters)

TikTok plans to label AI-generated content on its platform using a digital watermark called Content Credentials. TikTok is among 20 tech companies that have pledged to prevent AI-generated content from interfering in the U.S. elections this fall. TikTok already labels AI-generated content made inside the app, but Content Credentials allows it to label third-party creations, such as those from OpenAI.

ChatGPT maker OpenAI exploring how to 'responsibly' make AI erotica (NPR)

At the moment, OpenAI’s code of conduct bans sexually suggestive or explicit content, but the company is considering a change. In a lengthy document released on Wednesday, OpenAI expressed a desire to explore whether it can “responsibly provide the ability to generate NSFW content in age-appropriate contexts,” in both image and text formats. Given the recent wave of deepfakes and synthetic nudes, people are rightfully skeptical that this technology can be used responsibly.

Life

When grief and AI collide: These people are communicating with the dead (CNN)

Through technologies like Snapchat’s My AI or ElevenLabs’ voice cloning, people are coping with the loss of loved ones by recreating them using AI. Experts are concerned about the ethics of making the dead speak or behave in ways they never did, and about whether these recreations hinder the grieving process. For some, they may provide the comfort needed to get through a difficult time; for others, they might lead to stagnating in grief rather than working through it.


New at KPI

Juliette speaks about the AI Dilemma with fellow technologist and Canadian Mitch Joel on Six Pixels of Separation.
