New York City’s Microsoft-Backed Chatbot Encourages Business Owners to Violate Laws

A generative AI (GenAI) chatbot developed by New York City is facing criticism after providing advice that could lead small business owners to break the law. Powered by Microsoft’s Azure AI services, the “MyCity” chatbot has been accused of misstating local policies, with experts warning that its answers can be incomplete and dangerously inaccurate.

According to The Markup, the chatbot offered advice on housing policy, worker rights, and rules for entrepreneurs, often presenting itself as authoritative. For example, it inaccurately advised that landlords are not required to accept tenants on rental assistance or Section 8 vouchers, despite laws in New York City prohibiting income discrimination in housing.

Similarly, the bot incorrectly stated that businesses could operate cashless, disregarding a local mandate, in effect since 2020, requiring stores to accept cash. Despite these inaccuracies, the chatbot remains available online, prompting concerns about the dissemination of false guidance.

While New York City has added disclaimers noting that the chatbot’s responses are not legal advice, critics argue that the city has failed to implement adequate safeguards. Julia Stoyanovich, a computer science professor at New York University, criticized the city’s approach as reckless and irresponsible, highlighting the lack of oversight and accountability.

In response to the criticism, Mayor Eric Adams defended the decision to continue using the AI system, asserting that it represents a standard practice in technology deployment. However, critics contend that the city’s approach exposes users to potential harm and legal risks.

Microsoft, the provider of the AI services, has committed to collaborating with New York City to improve the accuracy of the chatbot’s responses. Nevertheless, the incident underscores the challenges and legal implications associated with deploying AI technologies without sufficient oversight and accountability measures.

The case serves as a cautionary tale for organizations embracing AI, highlighting the importance of implementing robust safeguards to mitigate the risks of misinformation and legal liability.