The Xero Responsible Data Use Advisory Council recently held its seventh meeting, with a focus on the extraordinary developments in the field of generative AI, and the myriad potential applications for small business.
The council is made up of myself, Laura Jackson of Popcorn Shed (business owner), Maribel Lopez of Lopez Research (technology analyst), Wyndi and Eli Tagi of WE Mana (advisors), Aaron Wittman of XBert (app developer), Anna Johnston of Salinger Privacy (privacy compliance specialist), and Felicity Pereyra of Elevate Strategies (data analytics strategist).
In a discussion led by Xero’s GM of AI Products, Soon-Ee Cheah, we explored both the benefits and potential pitfalls of tools like ChatGPT.
Soon-Ee kicked off by asking us to think about what these technologies mean and their limits. When using generative AI in business, we should have a sense of how confident we need to be in its outputs. To take an extreme example, a self-driving car that’s right 99% of the time is an unacceptable risk, because the 1% error rate means it will eventually drive off a cliff.
On the other hand, a lower rate of accuracy may be acceptable if we want to use an AI to help write an internal business report. However, using those same outputs in a press release could be disastrous if it misleads or breaches third-party copyright. As for using these technologies in tax and financial advice: if that advice is bad, the consequences could be severe and the accountability measures tricky. The key is to assess the downside of ‘getting it wrong’ in the given context.
Limitations of generative AI
The discussion then turned philosophical. How well placed is society to adapt to such a major technological change, and what underlying assumptions will we have to challenge? Soon-Ee suggested that humans have historically taken heuristic approaches (aka mental shortcuts) to assessing truth, based on the asymmetry of available information. For instance, if 99 recipes for French salad dressing suggest using olive oil, and one suggests turnip juice, most of us (the AI included) would disregard the purple variant. But in a world of disinformation – where, for example, a vocal minority wrongly believes that an election was stolen – this heuristic approach may not serve us well. Generative AI is only as reliable as the data it feeds on.
At this point, Maribel pointed out that generative AI proposes answers without allowing us to interrogate the underlying data. In other words, it asks us to outsource our own critical faculties. Soon-Ee agreed, and highlighted research underway that aims to show the fact base of an AI output back to the user (suggesting it might have been wiser to wait until these technologies had matured before unleashing generative AI on the world).
Generative AI can present legal and privacy implications
The conversation turned to some of the legal challenges of generative AI, specifically, the IP and privacy implications. Anna questioned the assumption (which arguably underpins generative AI) that everything on the internet is ‘in the commons’. For instance, putting personal information or copyrighted material online should not mean it is open season for training AI models. However, she also suggested that regulators will have a difficult time keeping up with these developments and protecting individual and proprietary rights.
We are still in the Wild West, with many legal and regulatory implications yet to be worked through. But with lawsuits underway alleging widespread copyright infringement, and privacy regulators finding breaches of law in the way personal information has been scraped from internet sites, business owners do need to be careful about assuming that generative AI outputs (including code) are safe to use.
While mindful of the risks, the group agreed that there are also major upsides to tools like ChatGPT. We are only scratching the surface of the benefits in terms of efficiency, customer experience, and better decision making. Council member Aaron, whose company XBert has long used AI to help accounting professionals work more productively, is now in the early stages of using generative AI to unlock these benefits for its customers.
In using generative AI, we discussed the need to be mindful about unwittingly giving away valuable data and IP. A point that I have felt compelled to make before is that apparently “free” versions of generative AI come at the cost of handing over data, and are unlikely to remain free for long. The old adage of “If you’re not paying for the product, you are the product” rings true. We all need to be savvy consumers and take a longer-term commercial view before embedding these products in our business models.
Soon-Ee finished on a human note, with some reassurance that people are not about to be replaced en masse by these technologies. He pointed out that, despite digital clocks telling perfect time, old-fashioned mechanical watches still sell by the millions. They come with an economic value that is decoupled from efficiency. In an AI-driven future, will businesses stand out by offering a human element that cannot be simulated by a machine? On that thoughtful note, a very interesting discussion came to an end.