At KeepAISafe.org, we are dedicated to educating and empowering stakeholders in the realm of artificial intelligence (AI) to ensure the safe and ethical development and deployment of AI technologies. Our mission is grounded in the belief that while AI holds immense potential for societal advancement, it also introduces complex challenges that necessitate informed and conscientious oversight. We aim to serve as a pivotal resource for knowledge, guidance, and collaboration, fostering a community committed to navigating the intricacies of AI with a vigilant and ethical approach. Through education, advocacy, and active engagement, we strive to catalyze positive change and ensure AI serves the greater good of humanity.
Our Vision
We envision a future where AI systems are designed with the utmost consideration for human values, safety, and the long-term impacts on our world. We strive to be a leading voice in the conversation around AI safety, advocating for responsible practices and educating the public on the importance of safeguarding our digital future.
Join us in our journey to Keep AI Safe. Together, we can shape a future where technology serves humanity in the best way possible.
You’re reading The Briefing, Michael Waldman’s weekly newsletter.
There’s something scary online involving Kari Lake — and it’s not what you might expect.
The nonprofit journalism site Arizona Agenda posted a minute-long video of the TV news anchor turned GOP candidate, praising the site’s work . . . and then, halfway through, revealing that it is all a deepfake. It is especially convincing on a phone, where the glitches are less noticeable. This is new, and unnerving, and ominous.
It is now less than two years since ChatGPT was released, and the world began to debate how much change advances in generative artificial intelligence would bring. Are they like Gutenberg’s Bible, made possible by the new technology of the printing press? Or are they yet another techno-fad, more hype than impact? Over the coming years, all this will unfold with massive repercussions for our work, healthcare, and lives. (A guarantee: The Briefing is written by a live person, and always will be!)
When it comes to elections, it is becoming increasingly clear that the biggest new threat in 2024 comes from the impact of generative AI on the information ecosystem, including through deepfakes like the one starring “Kari Lake.” (The real Lake, meanwhile, sent a cease-and-desist letter to the website.) That risk is especially high when it comes to audio, which can be easier to manipulate than visual imagery — and harder to detect as fake.
Last year the Slovak presidential election may have been tipped by fake audio of a leading candidate that went viral days before the vote. In New Hampshire, bogus robocalls from “Joe Biden” urged voters to sit out the primary. In Chicago’s mayoral election, a fake tape purported to feature a candidate musing, “In my day, no one would bat an eye” if a police officer killed 17 or 18 people. The risk of doctored audio and video makes it harder to know what is real. Donald Trump has taken to decrying any video that makes him look bad as fake.
At the Brennan Center, we worry especially about how all this might affect the nuts and bolts of election administration. Recently we held a “tabletop exercise” with Arizona Secretary of State Adrian Fontes, one of the country’s most effective public servants, and other election officials in the state. It featured a similar fake video starring Fontes, created for educational and training purposes. The verisimilitude was so unnerving that the recording was quickly locked away.
Here’s a scenario we tested out: You’re a local election official. It’s a hectic Election Day and you get a call from the secretary of state. “There’s been a court order,” she says urgently. “You need to shut down an hour early.” When you receive a call like that, take a breath and call the secretary of state’s office back. You’ll find out quickly that the call was actually a deepfake. That’s the kind of simple process that could catch the fraud before it takes root.
Government can take other steps, too. We’ve laid out many of them in a series of essays with Georgetown’s Center for Security and Emerging Technology. Often, officials need to take steps that would already make sense to protect against cyberthreats and other challenges.
There is more that needs to be done. One good step is to watermark AI-generated content, making clear that AI was used to create or alter an image. Meta (aka Facebook) proudly unveiled such a system to label all content created with AI tools from major vendors such as OpenAI, Google, and Adobe. My colleague Larry Norden, working with a software expert, showed how easy it is to strip the watermarks from these images and circumvent Meta’s labeling scheme. It took less than 30 seconds.
So government will need to step up. Sen. Amy Klobuchar (D) of Minnesota, a leader on election security, is working with Sen. Lisa Murkowski (R) of Alaska and others to craft bills requiring that campaign ads making substantial use of generative AI be labeled. That requires finesse, since courts will be wary of First Amendment issues. But it can be done. Such reform can’t happen fast enough.
After all, as the deepfake Kari Lake put it so well, “By the time the November election rolls around, you’ll hardly be able to tell the difference between reality and artificial intelligence.” That’s . . . intelligent.
White House’s new rules for government AI deployments:
The White House Office of Management and Budget (OMB) has issued its first government-wide policy to mitigate the risks and harness the benefits of artificial intelligence (AI), as announced by Vice President Kamala Harris. The policy responds to President Biden’s Executive Order on AI, which called for action to strengthen AI safety and security, protect privacy, advance equity and civil rights, and promote innovation and competition.
This move is part of the Biden-Harris Administration’s efforts to ensure that America leads in responsible AI innovation.
For more detailed information, you can visit the official White House briefing room statement.
Copyright © 2023 Erik Conn, LLC All Rights Reserved.