Stardog CEO on Fueling NASA’s AI Mission and Enabling Data Relationships
Sep 27, 2023
The Washington, D.C.-based AI startup that helped NASA pilot its data analytics needs just released a new AI platform that lets enterprise clients interact directly with company data.
Generative artificial intelligence (GenAI) has quickly become the technology sector's wild west, with startups vying for attention as enterprise use advances. Stardog, an AI firm specializing in data analytics software, was founded in 2005 — well before ChatGPT's 2022 launch created a firestorm of hype around GenAI.
This week, Stardog announced the availability of its new Voicebox large language model (LLM) platform, which the company says will allow enterprise users with little AI experience to immediately access and interact with critical business data. The company has so far raised $25 million, including a recent investment from Accenture.
Voicebox is a self-service analytics platform that workers can use to directly question any enterprise data source using ordinary language, the company said. Many enterprises have limited ChatGPT use because of security and data privacy concerns. Stardog says its product avoids those concerns by using only corporate data sources.
InformationWeek chatted with Stardog CEO and founder Kendall Clark about the company’s history working with NASA, the future of GenAI, how CIOs and CTOs should tackle AI implementation, and how regulation will impact AI efforts.
Let’s explore your company’s history with NASA. How did that relationship come about?
The relationship started in the early history of the business. My two co-founders and I met at one of the several artificial intelligence labs at the University of Maryland. There I met a fellow who turned out to be one of the CIOs at NASA headquarters, so the genesis of our relationship was simply a personal one. I found myself at this lab as a researcher, and one of the visiting guests was trying to teach us about NASA while we taught him about AI. The relationship between NASA and Stardog has endured, and we've been helping very actively, both with our platform and with a professional services team that has worked for the last several years on integrating the rocket science supply chain. The crew capsule, for instance, is designed, built, tested, and deployed by a globally distributed team of companies, engineers, contractors, subcontractors, civil servants, and so on. That systems diversity and heterogeneity leads to data silos, which in turn lead to the inability to connect data across and between organizational boundaries. Stardog has been plugged in as a solution to that problem for a couple of years now.
AI seems to have taken over the whole industry and gone beyond hype, with everyone racing to adopt some form of GenAI in their business. What would your advice be for CIOs and CTOs of businesses just getting into GenAI?
I would say they should start small and with a very specific problem caused by data accessibility. And these problems show up in a lot of places … organizations that aren't software companies or technology companies need to focus on what they do best, rather than acquire the capability to do some abstract problem solving. Most of these companies should apply AI rather than create AI. That has a lot of good downstream effects: it focuses you on what's realistic, what kinds of skills you really need, what kind of vendor partners you may need, and what kind of changes you need to make in your organization — not just from a quality management point of view, but from a data strategy point of view. The solutions need to be rooted in practical, real-world problems.
Let’s talk a little bit about AI regulation and how that might impact the industry. How do you prepare for an unknown that will likely have a major impact on how you operate?
At the beginning of the web, I think the smartest thing the United States did was regulate very late in the game — it let the web be built and develop on its own. The states and the federal government tried to take a hands-off approach for a decade or more, and I think that's the right way to regulate. Regulating things before we understand them leads to bad regulation. On the flip side, a lot of big incumbent tech companies can win because they have a strategic advantage based on who they are, and they want to freeze the market with regulation. So I think there's a lot of bad faith in calls for regulation, and then a lot of clumsiness on the part of regulators … and now they want to regulate AI. I'm just reflecting my position as a disrupter: I don't want to see the market frozen. Consumers are worried about Big Tech overall, and that's the problem we should address before we worry about AI. AI will sort itself out later. Let's worry about data privacy, which is a problem we haven't addressed.