Selipsky at AWS re:Invent on Securing Data in the GenAI World

Cloud services provider continues to work its way into the AI conversation as policymakers increase the pressure to secure data that might be used with the technology.

Joao-Pierre S. Ruth, Senior Editor

November 30, 2023

Adam Selipsky delivers his keynote at AWS re:Invent 2023. (Amazon photo)

AWS CEO Adam Selipsky delivered a different kind of keynote at this year’s re:Invent conference, held this week in Las Vegas and streamed online, with the topic of AI seasoning much of his message this time around.

Rather than aim to go head-to-head with the likes of ChatGPT, as some other tech companies have explored, AWS has been positioning its cloud resources for other organizations developing generative AI applications. Though AWS re:Invent typically showcases products and services the company intends to unleash, it also offered some insight into how a major tech company without a competing generative AI chatbot of its own weaves itself into the conversation.

Whose Data Is It, When?

Selipsky discussed the importance of securing enterprise data as GenAI proliferates, which brings up questions about how data might be used and who could gain access to it as AI models are trained. For his part, Selipsky said customer data would remain protected in the Amazon Bedrock service and not be used to train or improve base models. Bedrock offers access to foundation models from AI companies and developer tools for building GenAI applications.

“When you tune a model, we make a private copy of that model,” he said. “We put it in a secure container, and it goes nowhere. Your data is never exposed to the public internet.”


Such data is encrypted in transit, Selipsky said, to give users control over access. He touted the service’s ability to help users meet regulatory standards such as HIPAA and GDPR, as data usage and access continue to face increasing scrutiny around the world. Industry talk of security has been further elevated by the recent international discussion of guidelines for secure AI system development.

The risks that AI might present were woven into Selipsky’s perspective on possible benefits of the technology. “We need generative AI to be deployed in a safe, trustworthy, and responsible fashion,” he said. “The capabilities that make GenAI such a promising tool for innovation also do increase the potential for misuse. We’ve got to find ways to unlock generative AI’s full potential while mitigating the risks.”

'Unprecedented Collaboration' Needed

He went on to say the challenge of addressing such pitfalls would “require unprecedented collaboration” with multiple stakeholders weighing in “across technology companies, policymakers, community groups, scientific communities, and academics.”

Such a movement is already underway, with Selipsky mentioning Amazon’s participation, alongside other tech companies, in the discussion of safeguards for AI -- at the behest of President Biden’s administration.


Whether it is the announcement of executive orders on AI from the White House or policies that take shape abroad, major tech players such as AWS are finding themselves in ongoing discussions with politicians and regulators watching the technology evolve and spread. “Just this month, I joined UK Prime Minister Sunak for his AI Safety Summit to discuss new approaches,” Selipsky said, “including the formation of their new AI Safety Institute.”

While there are obvious dark specters such as hackers who might seek to abuse AI technology, Selipsky did discuss a few nuanced concerns that enterprises might face and want to protect against. “For example, a bank could configure an online assistant to refrain from providing investment advice,” he said. “Or to prevent inappropriate content, an e-commerce site could ensure that its online assistant doesn’t use hate speech or insults. Or a utility company could remove personally identifiable information or APIs from a customer service call summary.”

As the world celebrates, or cautiously watches, the one-year anniversary of the sea change brought on by ChatGPT, AWS continues to look for ways to navigate this expanding space with an eye on the security and accessibility of the technology. “We believe that generative AI should help everyone at work seamlessly with helpful, relevant assistance,” Selipsky said, “whether or not you know the first thing about foundation models, RAG (retrieval-augmented generation), or any of the rest.”


About the Author(s)

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Follow him on Twitter: @jpruth.

