Generative AI, tech revolutions and semantic search: Insights from Rakuten Technology Conference

What do deep learning, semantic search and Japan’s next big breakthrough have in common? Artificial intelligence, of course – the topic of the day at Rakuten Technology Conference 2023.

Erwan Menard is Director of Product Management and GTM (Go to Market) for the Google Cloud AI business. He joined Rakuten’s resident AI expert and Chief Data Officer Ting Cai to talk about the challenges and opportunities of the AI boom.

All eyes are on generative AI

While innovation in “traditional” techniques like predictive AI continues to advance, the tech industry has largely turned its attention towards generative AI (GenAI) and the potential of large language models (LLMs).

“There is a total pivot in thinking, in the way you solve the problem. Instead of throwing curated information to train the model at a single task, you send a large corpus of information to the model to train it to do many different things, and then you consider specializing the model. I think that’s really the paradigm shift,” commented Menard.

The hallucination problem

Paradigm shifts like this come with their own set of challenges. For example, researchers across the industry – including at Rakuten and Google – are hard at work trying to solve a major problem that is limiting adoption: hallucinations. LLMs fundamentally work by predicting the missing word in a sentence, and sometimes, in an attempt to “fill in the blanks” and make an answer coherent, they’ll make up data. Often these hallucinations are egregious and easily spotted, but sometimes they’re subtle and entirely believable – and that’s where the danger lies.

“A side effect of that is that these models are incredibly assertive,” Menard explained. “So when you have a dialogue with the model, the answers are in a tone that’s extremely convincing. It doesn’t mean the model knows; it means the model is proposing a way to fill your sentence.”

There has been no shortage of media coverage of AI chatbots providing confidently wrong answers. These hallucinations are one major reason many businesses remain hesitant to entrust the reputation of their brands to LLMs.

“That’s not something we should ignore, or say does not exist, because it does. It’s actually part of the science underneath. Instead of ignoring it, the question is, how do we handle it?”

One technique, called grounding or retrieval-augmented generation (RAG), improves the accuracy of LLMs by plugging in real data for the model to reference when crafting answers.

“You ask a question to the model and the model answers, and in addition provides you links to information that supports its answers,” Menard explained. “That way, you work around the hallucination effect.”
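
For readers who want a concrete picture of the grounding pattern Menard describes, here is a minimal RAG sketch in Python. The retrieve() and generate() functions are hypothetical stand-ins (a real system would plug in a vector store lookup and an actual LLM API call), but the flow is the same: fetch supporting passages first, then ask the model to answer from them and cite its sources.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# retrieve() and generate() are hypothetical stand-ins: a production
# system would use a real search index or vector store and an LLM API.

def retrieve(query: str, documents: list[dict], top_k: int = 3) -> list[dict]:
    """Return the documents most relevant to the query (naive word overlap here)."""
    def score(doc: dict) -> int:
        return len(set(query.lower().split()) & set(doc["text"].lower().split()))
    return sorted(documents, key=score, reverse=True)[:top_k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; included only to keep the example runnable."""
    return f"[model answer, grounded in the prompt below]\n{prompt}"

def answer_with_sources(query: str, documents: list[dict]) -> str:
    """Ground the model: pass retrieved passages in the prompt and ask it to cite them."""
    passages = retrieve(query, documents)
    context = "\n".join(f"- {p['text']} (source: {p['url']})" for p in passages)
    prompt = (
        "Answer the question using only the passages below, "
        "and cite the supporting links.\n"
        f"Question: {query}\nPassages:\n{context}"
    )
    return generate(prompt)
```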

Be safe, then be bold

“We need to do this responsibly,” Menard told the Tokyo audience. “We need to be mindful of privacy and we need to be mindful of these new frontiers that are being pushed by AI.”

Rakuten Chief Data Officer Ting Cai agreed with Menard on the vital importance of understanding GenAI’s risks before moving ahead.
“We have to be safe before we can go bold.”

One area where this is particularly clear is in the field of medicine. Menard’s colleagues at Google are building an LLM called Med-PaLM 2 to help medical professionals make judgments more efficiently.

“One thing that is important when you build a tool that effectively becomes extremely knowledgeable about an important topic, which is health, is to think about the way you position the product,” he explained. “It’s really about making humans more capable. Effectively the way we position that model, it cannot be used by legal terms to answer questions from patients.”

The tool is designed to be no more than an assistant for professionals.

“It always has to be used as a companion to a physician or nurse – somebody who can exercise judgment on the answers of the model. That’s not a technology decision, that’s an ethical posture.”

Part of developing responsible AI is ensuring a comprehensive understanding of the training data.

“What data are being used to train the model? Am I confident that it’s not infringing any copyrights? Do I have safety filters on the prompts and on the outputs to avoid bad language and bad answers?” Menard said. “But it’s also a matter of culture and ethics and standards, and I think it’s easy for us engineers to focus on the former. But this is a massive revolution. This is going to change all business processes.”

Bringing useful AI to the Rakuten Ecosystem

Rakuten has been busy with its own efforts on the AI front. Chief Data Officer Ting Cai’s global data and AI team has spent the last year building a sophisticated deep learning foundation that collects, analyzes and distills patterns from billions of data points across the Rakuten Ecosystem, giving the company a better understanding of its customers and what they need and want.

The first application of this platform was implementing semantic search on the company’s apparel e-commerce platform, Rakuten Fashion. While traditional search tools identify keywords in queries and then match them to similar keywords found in results, semantic search understands intent. For example, a customer planning to attend a fireworks festival can search for “Clothes to wear to a summer fireworks festival in Tokyo” and get plenty of results. In the past, such a long query wouldn’t return any hits.

“Our semantic search technology leverages years of e-commerce transactions to predict the most efficient embedding to not just match based on the exact keywords but on the underlying meaning,” explained Cai. “Not just what you say, but what you mean.”

RAG can use semantic search to retrieve product data, grounding the LLM so that it generates content based on the data the company actually holds.

“The model is learning your context as you refine your discovery. The context is not lost but you have the accuracy of search,” Menard agreed. “An image, text, or piece of audio can be seen as a vector. You use a large database of those embeddings and you do matching to find the closest. That is totally changing recommendations.”
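
The vector matching Menard describes can be sketched in a few lines. In the toy example below, embed() is a hypothetical stand-in for a trained embedding model (here it simply hashes words into a small vector so the code runs end to end); the point is the pattern of comparing a query embedding against catalog embeddings by cosine similarity, not the quality of this particular embedding.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding function: a real system would call a trained
    embedding model. Hashing words into a fixed-size vector is only a
    placeholder to keep the example self-contained."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def semantic_search(query: str, catalog: list[str], top_k: int = 3) -> list[str]:
    """Rank catalog items by how close their embeddings are to the query embedding."""
    query_vec = embed(query)
    return sorted(catalog, key=lambda item: cosine(query_vec, embed(item)), reverse=True)[:top_k]

# Example: rank a few made-up product descriptions against a long query.
print(semantic_search(
    "Clothes to wear to a summer fireworks festival in Tokyo",
    ["Cotton yukata with floral print", "Wool winter coat", "Lightweight summer sandals"],
))
```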

Smaller businesses are in danger of missing the AI train, Cai warned.

AI must be for everyone

Although we’re just at the beginning of the generative AI revolution, Cai stressed that now is the time for all businesses, both large and small, to start planning their AI strategy.

Cai argued that companies must embrace AI as part of their everyday work. This could be as simple as using GenAI to create visuals or text to boost a business’s online presence, or using conversational AI to power better, faster, smarter customer service – tools so friendly that the users don’t even think of them as AI.

“The risk of not learning the technology is also huge. Imagine that everyone else has GenAI as their secret tool in their pocket,” he posed. “We want to make AI technology more accessible to everyone – especially for small businesses.”

Menard has high hopes for Rakuten’s potential, given its many touchpoints with the world of small businesses.

“I think the promise of putting this technology in everybody’s hands can be accelerated by the likes of Rakuten. I think it’s our collective responsibility as technology people to abstract these technologies so that everyone can use them,” he said. “For small businesses, it’s about giving them creativity. In larger enterprises, it’s about productivity and assisting the employees, and I think that’s the way adoption will then scale.”

Priming Japan for its next technological breakthrough

Menard recalled Japan’s manufacturing prowess at the beginning of the 3G age in the late ’90s.

“The world was coming here to imagine the customer experiences that would be possible,” he reminisced. “There was this hardware leadership that was amazing.”

Like the internet and smartphones before it, the rise of AI could represent a revolutionary opportunity for Japan.

“It’s important to look at that as a pivotal moment that sets off some form of a reset from a talent and skill point of view,” he said. “Frankly, if I was a young engineer or entrepreneur in Japan today, I’d be super excited.”

AI dominated discussion at Rakuten Technology Conference 2023.

Menard stressed to the engineer-leaning audience the importance of getting familiar with the latest tools, and exploring a field for which there aren’t even any university courses. “Go touch, demo, play – that’s how you’re going to learn,” he said. “Engage, download the tools, play with it, talk about it with your friends.”

Menard ended the session by underscoring just how important this period of time is for engineers and society.

“It’s also an opportunity to think about it as a human being – as a citizen trying to do the right thing for society,” he ventured. “I think we’re privileged to be at this very moment because not only do we have amazing technology to play with, but we have most likely a pivotal moment for society… It’s an amazing moment. We’re really lucky.”
