Microsoft responds after temporarily blocking its employees from accessing ChatGPT over security and privacy concerns
Blocked from using OpenAI’s ChatGPT by mistake?
What you need to know
Per Microsoft’s Work Index Report, 70% of the employees who participated in the survey said they were ready to adopt the technology and incorporate it into their workflow to handle mundane tasks.
It’s no surprise that Microsoft employees use AI to handle some tasks within the organization, especially after the company extended its partnership with OpenAI by making a multi-billion-dollar investment. However, an emerging report by CNBC cites that Microsoft employees were briefly restricted from accessing ChatGPT on Thursday.
According to people familiar with the issue, Microsoft decided to briefly restrict access to the AI-powered tool due to “security and data concerns.” Microsoft issued the following statement pertaining to the issue:
“While it is true that Microsoft has invested in OpenAI, and that ChatGPT has built-in safeguards to prevent improper use, the website is nevertheless a third-party external service. That means you must exercise caution using it due to risks of privacy and security. This goes for any other external AI services, such as Midjourney or Replika, as well.”
While speaking to CNBC, Microsoft indicated that the restricted ChatGPT access was a mistake, which occurred while the company ran an array of tests on large language models.
“We were testing endpoint control systems for LLMs and inadvertently turned them on for all employees. We restored service shortly after we identified our error. As we have said previously, we encourage employees and customers to use services like Bing Chat Enterprise and ChatGPT Enterprise that come with greater levels of privacy and security protections.”
It’s evident that there’s a lot of concern revolving around the technology’s safety and privacy. President Biden issued an Executive Order addressing most of these concerns, but there’s still an urgent need for guardrails and elaborate measures that will help prevent generative AI from spiraling out of control.
This news comes after OpenAI confirmed a ChatGPT outage caused by a DDoS attack. The outage prevented users from fully leveraging the chatbot’s capabilities, showing them error messages instead.
Microsoft is on top of the AI security situation
In June, a cybersecurity firm issued a report citing that over 100,000 ChatGPT credentials had been traded in dark web marketplaces over the last 12 months. The firm further indicated that attackers leveraged info-stealing malware to obtain these credentials and recommended that users change their passwords regularly to keep hackers at bay.
Another report cited that hackers were using increasingly sophisticated techniques, including generative AI, to deploy malicious attacks on unsuspecting users. With this in mind, it’s not entirely unreasonable for Microsoft to restrict the use of AI-powered tools, especially over security and privacy concerns.
What are your thoughts on AI safety and privacy? Let us know in the comments.
Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You’ll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.