Newsletter / Issue No. 5

Image by Chuck Carter/Midjourney.

October 2023

Dear Aventine Readers,

This month we take a look at how companies are trying to put new generative AI systems like ChatGPT to work. The short version is that there’s an enormous amount of enthusiasm, but using the new systems is in no way risk-free. As one of our sources said, describing middle management’s response to the new tools, “It could go wrong in so many new ways.”

Also in this issue: Floating wind farms, doing math with DNA strands and a roundup of notable Nobel laureates.

Happy Halloween!

Danielle Mattoon
Executive Director, Aventine


The Big Idea

Building a Product with Generative AI? It’s Complicated.

Ask any CEO what’s on their mind, and — perhaps after interest rates and inflation — they will invariably arrive at what they believe to be the most significant opportunity of 2023 and beyond: generative AI. 

It’s less than a year since OpenAI’s ChatGPT brought the technology crashing into the mainstream. Yet already a full 70 percent of CEOs told the professional services company KPMG that generative AI is high on their list of priorities. (Last year’s equivalent survey didn’t even specifically reference AI.)

The lure is understandable, particularly with the new breed of large language models (LLMs) that have captured the public imagination for the better part of a year. Tools built on these new models — like ChatGPT, Google’s LaMDA and Meta’s Llama, all trained on quantities of data that no human could absorb in a lifetime — provide human-level performance at many tasks that were previously impossible for computers. The result is the possibility that huge swaths of work can be automated, a prospect as attractive to cash-strapped start-up founders as it is to leaders of Fortune 500 companies.

Yet the reality of building tools and products with modern LLMs is more complicated than simply signing up for an OpenAI account. Experts who spoke with Aventine agreed that the technology has made it far simpler for companies lacking deep experience in AI to experiment with ways to put it to use. But many also cautioned that issues including data governance, reputational risk, skill gaps and regulatory requirements could present obstacles to effectively rolling out new, LLM-based systems at the pace many CEOs may want.

“At the top, [leaders are] like, ‘Oh, actually, this is going to be transformational, we have to do something,’” said Alastair Moore, an associate professor at the University College London School of Management, who has a PhD in computer science and is also the co-founder of an AI start-up called Ai8. “At the bottom [of the hierarchy], you're like, ‘I'm just gonna do my thing [and use it] to make my life easier.’ And in the middle [management], it's like, ‘It could go wrong in so many new ways.’”

So, exactly how is generative AI different, how is it being used by businesses and what are the risks? Here is what we are seeing as companies adapt to it:

It’s a Tool Almost Everyone Can Use

ChatGPT has significantly lowered the barrier to entry for companies that want to build products that make use of AI. It is “one of the most deployable machine learning [tools] that we've ever had,” said Michael Lee, a partner at the global management consultancy firm Baringa, who specializes in data and AI. Previous AI models usually had to be trained on a company’s own data in order to be useful — a process that required highly skilled engineers with access to well-labeled data. ChatGPT, however, is versatile enough that users don’t need to train their own models; its general-purpose abilities are good enough to undertake all sorts of tasks that formerly required a human touch, like writing marketing copy and suggesting prompts to call center workers. Etienne Pollard is the CEO of a U.K.-based start-up called Juno that has digitized the various legal processes involved in buying a home, from verification of financing to finalizing the purchase. He said that LLMs have become a primary focus of his product development team, and he agrees that generally LLMs perform well: “We haven’t yet needed to fine tune LLMs, because the performance and accuracy of unmodified state-of-the-art models has been excellent.”

The interfaces of these LLMs are also intuitive, giving even relatively unskilled individuals access to a technology that was previously off-limits to anyone who didn’t understand AI and coding. Because the systems respond to prompts written in plain English, it’s relatively straightforward to create complex systems that provide useful output without having to write the kind of code that in the past has been the preserve of AI PhDs. Instead, with modern LLMs, the skill is understanding what kinds of answers LLMs provide, how to write prompts that deliver the most useful responses, and how to make best use of the information that’s provided. As a result, said Carrie Hegan, another partner at Baringa who is working with companies that are looking to implement LLMs in their customer service functions, most organizations already have the skills they need, in engineering, data science and so on, to build tools using ChatGPT.
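To make that concrete, here is a minimal sketch of the sort of plain-English, prompt-driven call a product team might write. It assumes OpenAI’s Python library as it existed in 2023 (the pre-1.0 interface); the model choice and the marketing-copy task are illustrative, not drawn from any company mentioned above.

```python
# A minimal sketch: prompt engineering in place of model training.
# Assumes the pre-1.0 openai package (e.g., pip install openai==0.28).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def draft_marketing_email(product: str, audience: str) -> str:
    """Ask a general-purpose model for copy; no fine-tuning required."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            # The "skill" lives in plain-English instructions, not code.
            {"role": "system",
             "content": "You are a concise marketing copywriter."},
            {"role": "user",
             "content": f"Write a three-sentence email pitching {product} "
                        f"to {audience}."},
        ],
        temperature=0.7,  # some variety is fine for marketing copy
    )
    return response["choices"][0]["message"]["content"]

print(draft_marketing_email("a smart thermostat", "homeowners"))
```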

A low barrier to entry combined with the general-purpose abilities of these new LLMs is a potent combination. And interest has been piqued by stories from early adopters, such as the U.K. start-up Octopus Energy, which was using LLMs to automate 34 percent of its customer service emails back in April — “the work of 250 people,” according to its CEO, Greg Jackson. For certain tasks, “This type of technology now enables a very, very ambitious, [operating expense] reduction,” said Lee. “Very significant, in the kind of ‘half your business full time employment’ type [of thing], maybe more.” Pollard has certainly observed gains, noting that, where his company deployed LLMs alongside existing processes, the approach has “increased productivity by around 50 percent.”

There Are Significant Risks If Used for the Wrong Purpose

The question that many organizations are wrestling with, then, might not be how to use these sorts of LLMs but rather where to start. “Every organization that I'm speaking to has created their long list of ideas,” said Hegan. “That’s almost the easy part.” Now, she said, companies must determine the value LLMs can add, assess the feasibility of implementing them at scale and understand the risks they present. Determining the upside often boils down to the plain old business calculus of identifying the areas in which AI can provide the highest return on investment. Where things get trickier is in thinking through what could go wrong.

Perhaps the most obvious challenge for any company planning to use LLMs is the models' propensity to hallucinate (see more on this in our July newsletter) and create text that is riddled with inaccuracies — a so-far-unsolved problem that makes it difficult to use these tools for mission-critical or certain customer-facing purposes. While it might be relatively risk-free for an LLM to, say, churn out simple marketing emails, the stakes are much higher if companies want to use LLM-based systems to interact directly with clients for more nuanced purposes, such as customer service. In those applications, hallucinations in an LLM’s output could be particularly troublesome. “There is definitely a recognition of the risk this poses to an organization and its customer interactions,” said Hegan.

Many companies that are looking to use the technology are therefore starting with a human in the loop: A customer service call may be transcribed in real time and studied by an LLM that then provides prompts to a call handler; it is the human’s job to decide whether the software has provided correct and useful information. 
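As a sketch of that human-in-the-loop pattern: the loop below keeps a person as the gatekeeper for everything the model suggests. The transcription hook is a hypothetical stand-in for a real telephony stack, and the API call is the same illustrative 2023-era OpenAI interface used above.

```python
# A sketch of the human-in-the-loop call-center pattern described above.
import openai

def suggest_reply(transcript: str) -> str:
    """Ask the model for one suggested reply to the conversation so far."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative
        messages=[
            {"role": "system",
             "content": "Suggest one short, factual reply a customer "
                        "service agent could give. If unsure, say so."},
            {"role": "user", "content": transcript},
        ],
        temperature=0,  # keep suggestions conservative
    )
    return response["choices"][0]["message"]["content"]

def transcript_chunks():
    """Hypothetical stand-in for real-time speech-to-text output."""
    yield "Caller: I think I was double-billed last month."
    yield "Caller: I'd like the extra charge refunded, please."

transcript = ""
for chunk in transcript_chunks():
    transcript += chunk + "\n"
    suggestion = suggest_reply(transcript)
    # The human stays in the loop: the agent sees the suggestion and
    # decides whether it is correct and useful before acting on it.
    print("SUGGESTED REPLY (agent to review):", suggestion)
```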

“People are being quite conservative,” added Hegan. “Many organizations are cutting their teeth on some of the internal use cases … just using it for things like accessing HR systems to get policy information for employees, things like that.” Efficiency gains here shouldn’t be underestimated, though: In the knowledge economy, helping employees navigate information faster is highly beneficial, even if it may be harder to quantify than the number of automated customer service emails.

Businesses Will Be Responsible for the Data They Share

In May, Samsung banned its employees from using ChatGPT on company devices after it discovered that some workers had uploaded potentially sensitive information, including proprietary source code and meeting transcripts. (In the first instance, someone was using the service to optimize code; in the second, an employee wanted ChatGPT to create a presentation based on a discussion that took place in a meeting.) By default, ChatGPT saves a user’s chat history and can use the content of conversations to train its models. Samsung feared that its own proprietary information could be held on servers used by OpenAI (and, by extension, Microsoft), which would be difficult if not impossible to have deleted, and could form part of the training corpus for the ChatGPT algorithms. Samsung isn’t alone in its concerns about where data might flow. “Lots of people won't want to send [their] data over [ChatGPT],” said Moore.

At the end of August, OpenAI introduced an enterprise license, which — for an undisclosed fee — guarantees users that “prompts or data are not used for training models.” Those with more technical teams were given the option to pay to use the API, which makes the same promise. But smaller companies and start-ups grappling with tight budgets that nevertheless hope to experiment will instead likely use the $20-per-month Plus offering, which makes no such claims. “So then you’ve got to decide … what you can send [to OpenAI] and what you can't send, and why,” said Moore.
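What “deciding what you can send” looks like will differ from company to company, but a minimal version is a deny-list check that runs before any prompt leaves the building. The sketch below is illustrative only, and its patterns are assumptions, not a complete data-governance policy.

```python
# An illustrative pre-send filter: refuse to send prompts that look
# like they contain material that shouldn't leave the company. The
# patterns are placeholders; a real policy would be far more thorough.
import re

DENY_PATTERNS = [
    re.compile(r"BEGIN (RSA|OPENSSH) PRIVATE KEY"),       # credentials
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                # card-like numbers
    re.compile(r"(?i)\b(confidential|internal only)\b"),  # doc markings
]

def safe_to_send(prompt: str) -> bool:
    """Return False if the prompt trips any deny-list pattern."""
    return not any(p.search(prompt) for p in DENY_PATTERNS)

prompt = "Summarize this INTERNAL ONLY meeting transcript: ..."
if safe_to_send(prompt):
    print("OK to send to the API.")
else:
    print("Blocked: prompt appears to contain restricted content.")
```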

Charlotte Bax, the CEO of Captur, a start-up using AI to provide image verification for applications such as package delivery, explained that her team members expense their Plus accounts to work with OpenAI’s technology. In response to employee concerns about the data privacy issues this raises, the company rolled out its own LLM usage policy, which states that employees must turn off the setting in ChatGPT that saves conversation history for longer than thirty days, and also stipulates that employees cannot upload any proprietary information to the platform. “We haven't had customers asking us about this specifically,” she said, adding, “I expect that that will happen.”

Along with the risk of exposing trade secrets, there’s also the potential that this new breed of LLM could run afoul of governmental data regulation. The EU’s General Data Protection Regulation, or GDPR, imposes strict rules on the storage and processing of personal data; if companies used such data in prompts so that it ended up on OpenAI’s servers, or if they were to fine-tune systems on their own large pools of data that unknowingly contained personal data, they could be liable to significant fines. Part of OpenAI’s terms of service states that companies using its system must “provide legally adequate privacy notices” to users “and obtain necessary consents” for the processing of personal data. That said, where exactly the liability lies if things go wrong is murky at best.

Bringing Products Based on New LLMs to Market Could Be Difficult

Incoming AI regulation in Europe could be tough on LLMs, potentially classifying them as “high risk” because their general-purpose nature means they could easily be used for applications that the underlying law classifies as high risk, like credit scoring or resume screening. That could make the process of lawfully deploying tools that use them onerous. It could also mean that the ease of building products could ultimately be a liability for selling them, as companies may need extremely experienced staff to make sure their LLM products are regulation-proof.  “Who can genuinely say, ‘I understand how this 1.7 trillion-node model is working’?” asked Lee. “[If] you need to give a seal of approval over these types of applications, then [companies] will need people who are quite deep in these types of [systems] and probably don't have the right staff.”

That could prove financially burdensome to companies that already have to carefully prioritize resources. “Start-ups and smaller companies are now going to have to take on so much additional cost,” said Bax. “Like, does this mean that I now have to hire a data security person in-house? How am I going to do that?”

That kind of expertise may be almost impossible to come by, said Moore, because while there are plenty of AI experts in the world, few have been building or working with the new LLMs for the simple reason that they’ve existed for such a short period of time. And fewer still have had to think about the safety of such algorithms in the wild. “There's no practical experience,” he said. 

Hegan also pointed out that there will be a lack of expertise in nontechnical teams that are adjacent to the development of tools built using LLMs — risk management, legal and so on. Those teams, she said, will find themselves playing catch-up as the technology takes hold.

Using the New LLMs Is Likely Inevitable

Despite the known challenges, the excitement around building with LLMs was palpable among most of the experts who spoke with Aventine. What comes across most strongly is the sheer scale of the opportunity.

“If you look at where there's a reasonable amount of repetition in a process — an office-based role, where you're using information or trying to understand something, and then come up with an outcome — I would say almost all of those processes are up for automation,” said Lee. 

As with any new technology, there’s a tension for business leaders between jumping in too soon — before time-consuming wrinkles have been ironed out — and waiting too long, giving competitors an advantage.  But with generative AI, there might not be a choice.  “Because [LLMs are] working so well, even if you tell people not to use them, they will,” said Moore. “Whether people like it or not, it will happen.”

Listen To Our Podcast

Learn about the past, present and future of artificial intelligence on our latest podcast, Humans vs Machines with Gary Marcus.

Quantum Leaps

Advances That Matter

A giant climate experiment like no other. At the start of October, the European Union kicked off a potentially world-shaping effort to battle CO2 emissions. But this isn’t nuclear fusion, electric airplanes or a plan to reflect the sun's warming rays. It’s a tax. The new Carbon Border Adjustment Mechanism, also known as CBAM, is an import tax on carbon-intensive products — cement, iron, steel, aluminum, fertilizers, electricity and hydrogen. There are already taxes on highly carbon-polluting goods that are manufactured and sold within EU borders; the CBAM applies the same approach to goods imported into the EU from nations that have no such carbon tax. The tax will kick in starting in 2026; until then, importers will just have to declare the emissions associated with imports. Skeptics may argue that this is a smart way for the EU to invigorate its own heavy industries, not least by ensuring that companies don't leave the EU to manufacture their products in a tax-free environment. Regardless, it will undeniably force other nations to pay close attention to the bloc’s approach to carbon.

Building a programmable DNA computer. DNA can be used to perform computation: The building-block molecules that make up strands of DNA act like the binary code of 1s and 0s. When strings of DNA bind and come apart they essentially perform operations on that binary code, and because DNA can hold huge amounts of data (as much as 1 billion gigabytes per cubic millimeter) it’s believed that it could be used to tackle particularly difficult calculations. But DNA computing is still very much a focus of research rather than a commercial activity; one big barrier is that most DNA computers can run only highly specific algorithms because it’s hard to encourage the strands to perform a programmed series of operations. That’s in stark contrast to regular digital computers, which can be used to solve many different problems. New research from Shanghai Jiao Tong University in China, though, shows that it’s possible to use so-called DNA folding to encourage specific chains of operations, effectively making the DNA computers programmable. The researchers used the approach to calculate square roots of numbers, as well as to identify healthy and unhealthy genetic molecules. The approach won’t replace modern computation — not least because it’s slow, with calculations taking hours. But, as IEEE Spectrum reports, because of the ability to work with genetic code, DNA could yet be harnessed to perform highly complex biological operations, such as programming cells, in the future.
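For a sense of what “calculating square roots” means here: a well-known demonstration in DNA computing takes a 4-bit number and outputs the floor of its square root, a function that reduces to a handful of AND/OR gates. The sketch below shows that gate logic in Python, purely to illustrate the computation the chemistry implements; it says nothing about the strand-binding reactions themselves.

```python
# Illustration only: the boolean logic behind a classic DNA-computing
# demo, floor(sqrt(n)) for a 4-bit input n (0-15). In a DNA computer
# the AND/OR gates below are realized chemically by binding strands.
def floor_sqrt_4bit(n: int) -> int:
    b3, b2, b1, b0 = (n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1
    y1 = b3 | b2  # high output bit: 1 exactly when n >= 4
    # low output bit: 1 when the answer is odd (i.e., 1 or 3)
    y0 = ((1 - b3) & (1 - b2) & (b1 | b0)) | (b3 & (b2 | b1 | b0))
    return (y1 << 1) | y0

for n in range(16):
    assert floor_sqrt_4bit(n) == int(n ** 0.5)
print("Pure gate logic reproduces floor(sqrt(n)) for every 4-bit n.")
```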

ChatGPT goes multimodal. OpenAI’s well-known app is no longer just a chatbot: Users on the $20-per-month Plus subscription can now prompt the system with images or audio and hear responses through AI-generated voices if they choose to. While the company hasn’t released details about exactly how the technology works, these kinds of multimodal systems typically translate features found in images, audio or text into a kind of shared mathematical language, allowing them to convert concepts from one medium to another. The text-to-audio feature, trained on recordings of voice actors, is convincing (if somewhat annoying), and it will sit alongside another new feature, announced earlier in September, which will allow paying ChatGPT users to create images using DALL·E 3, OpenAI’s text-to-image AI system, by simply typing prompts. While many of these concepts existed before in some form, what OpenAI is doing here is notable because of the power of its large language model: It’s turning its powerful LLMs into a central system that over time will ingest all sorts of different inputs and provide results in many different formats. Indeed, this may all play into a rumor about a device that OpenAI is reportedly looking to build in collaboration with the former Apple designer Jony Ive. The company is clearly seeking to exploit its already significant first-mover advantage, at a pace that may leave many of its critics more than a little uneasy.
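OpenAI hasn’t said how its multimodal system works, but the “shared mathematical language” idea can be sketched in a few lines: separate encoders map an image and a caption into the same vector space, and similarity between vectors tells you how well they match. The vectors below are invented toy values, not the output of any real encoder.

```python
# Toy illustration of a shared embedding space. Real systems learn
# encoders that map images, audio and text into vectors with hundreds
# of dimensions; here the vectors are made up for the demo.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

image_vec = np.array([0.9, 0.1, 0.3, 0.0])  # pretend: photo of a dog

captions = {
    "a photo of a dog":     np.array([0.8, 0.2, 0.4, 0.1]),
    "a stock market chart": np.array([0.0, 0.9, 0.0, 0.8]),
}

for caption, vec in captions.items():
    print(f"{caption!r}: {cosine_similarity(image_vec, vec):.2f}")
# The caption whose vector points in nearly the same direction as the
# image vector scores highest; that shared geometry is what lets one
# system move between media.
```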

Innovation on the Ground

Technology’s Impact Around the Globe

1. Arizona, U.S. As temperatures soared in the Southwest over the summer, utility companies had a secret weapon at their disposal: smart thermostats. Canary Media reported that Arizona’s three largest utilities tapped more than 100,000 of the devices over July and August. Owners had agreed to have their devices adjusted remotely in return for getting the smart thermostat for free. Jointly, Arizona Public Service, Salt River Project and Tucson Electric Power were able to cut demand by an average of 276 megawatts on hot summer days — the equivalent of tens of thousands of homes’ worth of power consumption — by remotely setting thermostat temperatures higher to ease electricity demand from air-conditioning units. And the utilities are experimenting with more complex approaches too, such as pre-cooling homes by jacking up air-conditioning early in the day to reduce demand in the afternoon and evening when temperatures are higher.

2. Myanmar. How much does it cost to build a military drone from scratch? For one rebel group in Myanmar — just one of hundreds battling the nation’s military, which took control of the country in 2021 — the answer is: about $5,000. Wired reports that by using 3D printers and components smuggled from Thailand, one young engineer, who cut his teeth on 3D-printed guns before finding they weren’t suitably robust, is building fixed-wing drones known as Liberators that can carry payloads of as much as three pounds. So far, the drones have been used to carry out attacks on military command centers and outposts, though many still fail soon after launch. If they sound familiar, that’s because they are reminiscent of the improvised bombs being built by Ukrainians that we described last month; in fact, early inspiration for some of the drones came from designs used in Ukraine.

3. Ecuador. There are a lot of bugs in the Chocó rainforest of Ecuador. No, not those kinds of bugs; the forest is now home to a series of battery-powered microphones at more than 40 sites, which researchers are using to track animal life — a practice known as bioacoustics. According to research published in Nature Communications, recordings from the microphones — which run for two minutes out of every 15 in order to conserve energy — have been analyzed by both human experts and artificial intelligence to identify the presence of creatures such as birds and insects. The advantage of using AI, which the scientists working on the project say has been highly reliable, is that researchers will be able to automate the tracking of bird populations as they grow and decline over weeks, months and years, all while avoiding human disturbance of the sites. That could be an important tool in securing and protecting ecosystems from potential extinctions in a region that’s facing rapid deforestation.

Long Reads

Magazine and Journal Articles Worth Your Time

The Global Race to Tap Potent Offshore Wind, from IEEE Spectrum
3,100 words, 12 minutes

Wind power is already America's largest source of renewable energy, accounting for just over 10 percent of the nation’s power. And the best place for wind turbines is offshore, where winds blow stronger and more consistently. But two thirds of America’s offshore wind energy potential is in deep-water areas where regular offshore wind turbines can’t be built, so the only way to capture that energy is to use floating wind turbines. This feature from IEEE Spectrum takes a close look at the challenges involved in building a floating structure that can support a turbine weighing 1,000 metric tons and reaching hundreds of feet into the air. One particular 12-rotor design is referred to by the engineers behind it as a “wind farm on a stick.”

Inside the Small World of Simulating Other Worlds, from Undark
3,600 words, 14 minutes

If the time ever comes to begin our migration to Mars, how will we cope with the new way of life? How will early settlers handle the isolation? What equipment will allow them to prosper? How will they fare with minimal resources for extended periods of time? For more than two decades, so-called analog astronaut missions — basically simulated long-term space travel conducted here on Earth — have been attempting to answer such questions. Volunteers and researchers lock themselves in sealed habitats or venture to remote locations in order to experience something akin to life on Mars or aboard a spacecraft, living and working under conditions that many of us would find intolerable. This story from Undark explains how the practice is becoming formalized so that disparate research efforts are connected, more effective and more relevant to our future travels.

How Oyster Farms Could Undo Decades of Environmental Destruction, from MIT Technology Review
4,400 words, 17 minutes

Farming oysters leads to some delicious results and can also be great for the planet. Through a process known as aquaculture, shellfish are farmed in large cages so that they can be positioned to help clean up waterways, removing huge quantities of pollutants such as nitrogen and phosphorus that otherwise swell algae populations, which decimate plant and animal life in brackish waters. (Plus, farming oysters can also provide local communities with a new source of revenue.) And yet, as MIT Technology Review explains, many such efforts have struggled largely due to onerous regulations, NIMBYism and the cost of coastal land where processing facilities must be located. But, as the article also explains, there’s still a glimmer of hope for this low-tech solution to addressing pollution in ocean waters.

Innovators

Nobel Roundup

Every October, committees in Sweden and Norway name the winners of the Nobel Prizes, celebrating achievements in sciences, literature, economics and peace work. Here, we give you a quick rundown on the winners of the three scientific categories — medicine, physics and chemistry — and explain why they are important.

The medicine prize was awarded to Katalin Karikó and Drew Weissman for discoveries that led to the development of COVID-19 vaccines. The pair identified a chemical adjustment to so-called messenger RNA — the long-chain molecules that act as a template for the production of proteins — that allows it to be administered to humans without causing adverse immune responses. This specially designed mRNA can be used as a vaccine. When injected, the mRNA helps the body build a protein found on the surface of a certain virus, prompting the body to create antibodies for that virus; then, when the virus arrives in the body, the antibodies are able to fight it off. It’s this approach that was used by Moderna and BioNTech/Pfizer to create COVID-19 vaccines during the pandemic.

The physics prize was awarded to Pierre Agostini, Ferenc Krausz and Anne L’Huillier for their work on electrons. Specifically, they developed a means of creating incredibly short pulses of light that can be used to measure rapid processes, such as an electron’s movement. The light pulses — so short that they’re measured in attoseconds, or quintillionths (0.000000000000000001) of a second — are created by allowing different frequencies of light waves to interact in very particular ways. In a similar way to a short camera exposure capturing a very specific moment in time as an image, these short light pulses can be used to reveal information about the behavior of an atom, including where its electrons are in their orbits.

The chemistry prize was awarded to Moungi Bawendi, Louis Brus and Alexei Ekimov for their discovery and development of quantum dots. These are tiny specks of semiconductor, like the material used in computer chips, that emit different colors of light depending on their size. They take their name from the fact that quantum physics describes the relationship between the size of the dots — which measure a few millionths of a millimeter across — and the color of light they emit. Before the 1990s, creating pieces of semiconductor so small seemed impossible; now quantum dots are widely used in TVs and are also being used to develop new technologies, such as thinner solar cells and encrypted quantum communication devices.
