Imagine a world where one could simply type in the kind of app they want to build and have it created swiftly using artificial intelligence. This was the promise Builder.ai made to its investors and users.

Its digital assistant was named Natasha, and the AI-driven startup gained recognition in the AI startup community, reportedly receiving backing from Microsoft and a Qatari sovereign wealth fund. However, it was eventually revealed that no AI was developing the apps. Instead, a team of 700 engineers in India was manually coding the applications. Builder.ai misled its investors into believing they were investing in advanced AI technology.

While the Wall Street Journal reported as early as 2019 that the platform depended mostly on human engineers rather than AI, the company's downfall came in May 2025, when its new CEO discovered significant financial discrepancies. The founder had claimed sales of $220 million, but an audit found the actual figure to be just $50 million. US prosecutors have initiated an investigation and requested access to the company's records and data.

A thorough investigation will be required to uncover the full details of what transpired. Questions will no doubt arise about why investors did not conduct due diligence at the time of their investment, whether the directors noticed any irregularities, and whether there were any whistleblowing complaints from employees or vendors. The auditors' role may also be examined, particularly regarding salary payments to a large workforce in India over such a long period. These questions highlight gaps in corporate governance that need to be addressed.

Need For AI In Investigations

The intersection of fraud investigations and artificial intelligence underscores both the promise and perils of advancing technology. AI, hailed as a transformative tool for detecting anomalies and analysing vast data sets, has reshaped the landscape of financial scrutiny.

In theory, its predictive algorithms and machine-learning capabilities provide an unparalleled advantage in identifying fraudulent patterns, flagging irregularities, and streamlining compliance processes. However, the Builder.ai case serves as a cautionary tale, revealing how AI itself can become a smokescreen for deception.

Investigations into fraud involving AI require a multifaceted approach. While the facts in the Builder.ai case are fairly straightforward, authorities must now learn how to navigate complex layers of technology to discern genuine innovation from fabricated claims. They often rely on forensic audits that integrate AI-powered tools to trace financial trails, uncover discrepancies and analyse employee communications for signs of collusion or negligence.

AI-driven fraud detection systems, such as natural language processing for email analysis or image recognition for falsified documents, can accelerate the investigative process. Yet, the efficacy of these systems depends heavily on their design, transparency and governance — all elements that were grossly misrepresented in Builder.ai's narrative.

At the same time, AI finds itself increasingly entangled in ethical dilemmas during fraud investigations. Questions arise about the security of sensitive data entrusted to AI systems, the bias embedded within algorithms, and the accountability of companies deploying AI for financial oversight.

In the absence of regulatory frameworks, the misuse of AI can lead to both inadvertent oversights and deliberate abuse, compounding the complexities of corporate fraud.

Leveraging The AI Toolbox: The Path Forward

As global markets become increasingly reliant on artificial intelligence, regulators and organisations must adopt proactive measures to mitigate risks. Enhanced auditing standards, independent validations of AI capabilities and real-time monitoring technologies are just a few of the safeguards required.

Moreover, fostering a culture of transparency and ethical AI development remains paramount. Without these, the potential for AI to be exploited — whether as a tool for fraud or as a misleading narrative — will persist.

Ultimately, the Builder.ai investigation sheds light on the dual role AI can play in shaping the future of fraud detection and prevention. While it holds the promise of revolutionising corporate accountability, it also demands vigilance, scrutiny, and ethical stewardship to ensure its own integrity as a cornerstone of modern technology.

Sara Sundaram and Sahil Kanuga are partners at Cyril Amarchand Mangaldas.

Disclaimer: The views expressed here are those of the authors and do not necessarily represent the views of NDTV Profit or its editorial team. 
