5 Q’s for Bindu Reddy, Co-founder and CEO of Abacus.AI

The Center for Data Innovation spoke to Bindu Reddy, co-founder and CEO of Abacus.AI, a startup in San Francisco that builds plug-and-play AI tools to help companies effortlessly deploy AI models. Reddy discussed how the company’s deep learning platform helps organizations build AI models in a matter of hours and automatically retrains them on real-time data.

Hodan Omaar: How does Abacus.AI automate the development and scaling of AI systems?

Bindu Reddy: In the conventional process for developing an AI system, you have to train and test the system in a development environment using test data and then, when it’s ready, rebuild it in a production environment using production data. That’s a big pain point for many businesses because rebuilding the model in production takes a lot of time and expertise: different tools, different data, and new data pipelines. They also have to think about monitoring the model and managing it throughout its lifecycle.

Abacus.AI makes the whole process much easier. We have developed plug-and-play AI tools that customers can use to build deep-learning models for various use cases, from fraud detection to video recommendations to understanding customer churn. Alternatively, for customers who have already built their own models in-house, our platform’s infrastructure can help move those models into a production environment in just a few clicks. Our tools can then monitor the model for drift, which is when a system’s predictive performance degrades because new input values no longer resemble the data it was trained on, and automatically retrain it.
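To make the idea of drift monitoring concrete, here is a minimal sketch in Python, assuming a single numeric feature and an illustrative threshold; the function names are hypothetical and this is not Abacus.AI’s actual API. It compares a feature’s live distribution against its training-time distribution using the Population Stability Index and triggers retraining when the shift grows too large.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a recent sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions so log() and division never see zero.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def maybe_retrain(train_feature: np.ndarray, live_feature: np.ndarray,
                  retrain_fn, threshold: float = 0.2) -> bool:
    """Trigger retraining when input drift exceeds the chosen threshold."""
    if psi(train_feature, live_feature) > threshold:
        retrain_fn()  # e.g., launch a new training job on fresh data
        return True
    return False

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)  # feature as seen at training time
    live = rng.normal(0.8, 1.0, 10_000)      # same feature in production, shifted
    maybe_retrain(baseline, live, retrain_fn=lambda: print("retraining..."))
```

A production system would track many features and prediction-quality metrics at once; the 0.2 cutoff here is only an illustrative choice.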

Omaar: Some governments, such as the Singaporean government, have made plug-and-play AI resources available to help small- and medium-sized companies adopt AI. Do you think something similar could work in other countries?

Reddy: For sure! From what I can tell, Singapore’s plug-and-play tools have even less abstraction than what we offer at Abacus.AI, meaning you need to put in a little more work to pull together an AI model than if you were to use our platform. But if state or local governments in other countries offered such tools, I think it could encourage a lot of AI experimentation to solve challenging problems, particularly those that serve the public good. The California wildfires are a good example. Predicting where the fires will spread next and informing firefighters where best to implement mitigation measures would be a great use case.

Omaar: Beyond commercial work, Abacus.AI is also contributing a lot of foundational AI research to the broader community. As a young startup, how do you balance investing in open AI research while maximizing your competitive advantage?

Reddy: AI is a really fast-moving field, but we’re still in the very early days of this technology. I like to say that if AI were a person, it might be a toddler at best. There’s a lot to be invented and discovered. I think it does us all a disservice not to strive to be at the edge of this technology, trying our best to extract intelligence from data. At Abacus.AI, we feel strongly that if we conduct foundational research to solve the difficult problems the community faces, we will not only be able to embed those findings into our products and services and get a commercial leg up, but we’ll also be able to give back to the AI community.

For instance, one of our recent developments has allowed us to offer the only tool that lets companies very quickly deploy deep-learning systems that utilize real-time data. Rather than retraining at fixed intervals, say every three hours, systems using our technology can retrain every time the data is updated, which could be every few seconds for some types of systems, such as those that use social media data.
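As a rough illustration of the difference between interval-based and data-triggered retraining, here is a small Python sketch; the loop structure and function names are hypothetical and not Abacus.AI’s API. A scheduled pipeline retrains on a timer, while an event-driven one retrains as soon as a fresh record lands.

```python
import queue
import threading
import time

events: "queue.Queue[dict]" = queue.Queue()  # stream of incoming data updates

def retrain(reason: str) -> None:
    print(f"retraining model ({reason}) at {time.strftime('%X')}")

def fixed_interval_loop(interval_s: float = 3 * 3600) -> None:
    # Conventional approach: retrain every three hours regardless of the data.
    while True:
        time.sleep(interval_s)
        retrain("scheduled")

def event_driven_loop() -> None:
    # Real-time approach: retrain whenever a new record arrives.
    while True:
        record = events.get()  # blocks until fresh data is pushed
        retrain(f"new data from {record['source']}")

if __name__ == "__main__":
    threading.Thread(target=event_driven_loop, daemon=True).start()
    # Simulate social-media updates arriving every couple of seconds.
    for i in range(3):
        time.sleep(2)
        events.put({"source": f"social_post_{i}"})
    time.sleep(1)
```

The point of the contrast is the trigger: the event-driven loop reacts to each update rather than waiting for a clock.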

Omaar: Speaking of social media, policymakers are increasingly concerned about the threat of deepfakes and disinformation from AI systems that generate or curate content. To what extent do you think your platform can help address these risks?

Reddy: Fundamentally, I think any technology can be used for harm. But having said that, I genuinely believe that as AI continues to evolve, it will curtail misinformation perhaps even better than it generates it. The more intelligent AI systems don’t just notice characteristics of online content, such as how much engagement it gets. More advanced tools can grasp the nature of the content and, one day, will likely be able to judge its propensity to misinform.

Large platforms that distribute content don’t really have any choice but to employ AI systems to quell misinformation. Since we play in the recommender space, I am hopeful that our tools will be able to help address some of these problems. Overall, I’m optimistic in this area.

Omaar: What is Abacus.AI’s goal over the next few years?

Reddy: Our vision for Abacus.AI has always been to make it dead easy for companies of all sizes to build AI systems at scale and to create an inflection point in AI adoption. Over the next few years, we hope to enable broad access to real-time deep learning systems. We’re particularly looking at increasing adoption in sectors and industries that are lagging in AI.

In part, they lag because the level of technique and expertise it takes to employ AI is so high. People who have AI skills and knowledge go to work at AI-focused companies rather than companies like Neiman Marcus or The Gap, whose sole focus is not AI but for whom AI would be very beneficial. I think the solution is to have deep learning tools that these sorts of companies can easily apply to solve problems. Abacus.AI can help do that.