Newsletter
OpenAI signs a deal with Reddit to train AI on Reddit users’ data, an AI robot gives the commencement speech at a New York college, and Sony Music Group sends letters to over 700 generative AI companies banning the use of its property without explicit licensing agreements.
Newsletter
Microsoft and LinkedIn release the 2024 Work Trend Index, Microsoft launches a GPT-4 LLM in an isolated environment on a government-only network, and Google DeepMind’s AI model maps human DNA and could help develop life-changing treatments for diseases.
Newsletter
Microsoft to invest $2.2 billion into Malaysian AI expansion, Anthropic launches iOS competitor to ChatGPT called “Claude AI”, and the US urges China and Russia to keep humans, not AI, in charge of nuclear weapons.
Newsletter
Meta to invest around $40 billion in AI in 2024, the first-ever US Air Force dogfight involving an AI-flown fighter jet was a success, and AI leads to medical breakthroughs in cancer drug prescriptions and gene editing research.
Three Scenarios for the Future of Generative AI
NYU graduate students imagine the future of GenAI in 2026 using their understanding of current trends in technology, business, social values, and geopolitics.
Wakeup Call to CHROs: How GenAI is Changing the Role of HR Leadership
When Netflix introduced video streaming in 2007, TV-watching habits changed dramatically. Now something similar is happening in the workplace with generative artificial intelligence (GenAI). HR leaders are on the front lines as the new ways of working evolve.
The conventional wisdom says the HR function will shift away from rote work toward more strategic work, with more time and attention for people issues such as talent development, recruiting, succession planning, and organizational design. But what will that actually mean?
Newsletter
Google funds a €25 million AI training initiative in Europe, a Pakistani political leader uses AI to communicate from prison, and an AI-enabled smart speaker saves a man’s life.
Newsletter
OpenAI CEO seeks trillions of dollars for chip funding, Google introduces AI assistant Gemini in new mobile app, and the US Government creates an AI safety consortium.
Newsletter
Meta to label AI-generated images on Facebook and Instagram, a first-of-its-kind AI heist, and a scientific breakthrough in deciphering ancient scrolls.
What is AI, again?
This year, many of the fears and hopes associated with artificial intelligence are coming to pass. It’s easy to overstate the fears — and to underestimate the risks. Let’s clarify them by remembering exactly what this technology is — and what it is not.
Newsletter
Amazon has a new chatbot, governments seek to crack down on “deepfakes” in light of recent events, and AI is spicing up the advertising scene on social media.
Automation Complacency: How to Put Humans Back in the Loop
How, then, can the developers of automated systems solve this dilemma, so that experiments like the one taking place in San Francisco end positively? The answer is extra diligence not just before the moment of impact, but at the early stages of design and development. All AI systems involve risks when they are left unchecked. Self-driving cars will not be free of risk, even if they turn out to be safer, on average, than human-driven cars.
Cade Metz and Arthur Koestler
The images, of Yann LeCun, Geoffrey Hinton, and Yoshua Bengio, are from Genius Makers by Cade Metz
The coders and creators whose names appear in headlines today – people like LeCun, Hinton, and Bengio – may be reshaping our world, but it was hardly certain at the outset that their version of AI, with neural networks learning on their own from raw experience, would prevail. Genius Makers reminds me of another great book about breakthrough pathfinders with an uncertain future. Arthur Koestler’s The Sleepwalkers, originally published in 1959, focuses on three luminaries of the Scientific Revolution: the timid Nicholas Copernicus, the hapless “cosmic architect” Johannes Kepler, and the scientific pioneer and heretic Galileo Galilei.
A Rapid Return from Inflation
A “Bright Swan” scenario of how inflation might turn out to be a short-term phenomenon.
An Unexpected Route to Responsible AI
A “Bright Swan” scenario of how democratizing software engineering can lead to a less frightening future for machine learning and AI bias.
Bright Swans: Hopeful Scenarios for Bleak Futures
“Bright swans,” like “black swans,” are unexpected events that appear out of nowhere, except that they make things better. Such events do exist, but they aren’t always noticed. When thinking about the future, we should look out for bright-swan possibilities. Sometimes we can help them come to pass.
Who’s watching the watchrobots?
What artificial intelligence really needs is a code of ethics: not just for the major investors and corporate leaders, but for the many people working as software developers and project managers.
Here’s the code of ethics we propose in our forthcoming book for Penguin/Berrett-Koehler (2023): Who Watches the Watchrobots?
1. Do No Harm.
2. Create Code that Speaks for Itself.
3. Reclaim Data Rights for People.
4. Use Machine Learning for Human Understanding.
5. Question and Confront Bias.
6. Hold All Stakeholders Accountable.
7. Embrace Creative Friction.
Scenario: How will our lives change if the pandemic lasts for years?
Scenarios of 2030: based on the Future of Media course at New York University
What if continual new variants keep the pandemic in place? The metaverse would then take shape as an alternative to restricted face-to-face contact. Several trends would combine (climate-related extreme weather, the aftermath of the Ukraine war, and the rise of AI) to increase the uncertainty of daily life. How will people cope?
What business readers most want to read
Most valued by business readers is credibility. Second is explanatory power: the ability to show how the world works, in a way that wasn’t obvious before. Next is rhetorical skill. Readers want to be carried away, immersed in a story, so that the task of reading no longer feels like a burden.
These qualities all rank higher than evidence, practical value, impact on people’s actual lives, or inspiring visionary messages. This post describes how we learned this.