
Apple's ReALM Aims to Deliver Context-Aware LLM-Powered On-Device Edge AI Assistants

Reference Resolution As Language Modeling delivers better responses than OpenAI's GPT-4, it's claimed, while running on device.

Gareth Halfacree
9 months ago • Machine Learning & AI

A team of researchers at Apple have published a paper unveiling ReALM, a reference resolution system powered by large language model (LLM) technology, which is claimed to be able to run on device while outperforming OpenAI's popular GPT-3.5 and GPT-4 LLMs.

"Reference resolution is an important problem, one that is essential to understand and successfully handle context of different kinds," the researchers write in the abstract to their paper. "This context includes both previous turns and context that pertains to non-conversational entities, such as entities on the user's screen or those running in the background. While LLMs have been shown to be extremely powerful for a variety of tasks, their use in reference resolution, particularly for non-conversational entities, remains underutilized."

"This paper demonstrates how LLMs can be used to create an extremely effective system to resolve references of various types," the researchers continue, "by showing how reference resolution can be converted into a language modeling problem, despite involving forms of entities like those on screen that are not traditionally conducive to being reduced to a text-only modality."

The problem the team set out to solve is the often-ambiguous nature of speech, in which people drop in references to "they" or "that" that aren't immediately clear to a machine learning system, particularly when the user is referring to something they can see on their smartphone's screen. The Reference Resolution As Language Modeling (ReALM) approach aims to solve that, giving voice assistant systems the context they need to respond usefully to seemingly ambiguous queries.
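
The core trick, as the quoted passages describe, is to flatten everything the assistant might be referring to, including entities parsed from the screen, into plain text, so that picking out "that" becomes an ordinary language modeling task. Below is a minimal sketch of that general idea in Python; the Entity fields, prompt layout, and numbering scheme are illustrative assumptions, not the paper's actual encoding.

```python
# A minimal sketch of reference resolution as language modeling:
# serialize candidate entities into a numbered text prompt so an LLM
# can answer a reference query by naming an entity identifier. The
# fields and prompt format here are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Entity:
    kind: str  # e.g. "phone_number", "address", "business_name"
    text: str  # the entity as it appears on screen or in conversation

def build_prompt(query: str, entities: list[Entity]) -> str:
    """Number each candidate entity and ask the model to reply with
    the identifier of the one the query refers to."""
    lines = [f"[{i}] {e.kind}: {e.text}" for i, e in enumerate(entities)]
    return (
        "Candidate entities:\n"
        + "\n".join(lines)
        + f"\n\nUser: {query}\n"
        + "Which entity does the user mean? Reply with its number."
    )

# Example: an ambiguous spoken query referring to something on screen.
screen = [
    Entity("business_name", "Joe's Pizza"),
    Entity("phone_number", "(555) 010-4477"),
    Entity("address", "12 Main St"),
]
print(build_prompt("call that number", screen))
```

Because the whole task is reduced to text in and text out, a single compact model can in principle handle conversational, on-screen, and background entities alike, which is the property the researchers lean on for their on-device claims.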

In the team's paper, ReALM is compared against the non-LLM MARRS model and OpenAI's GPT-3.5 and GPT-4 LLMs, testing the models' ability to understand queries referring to "on-screen entities, conversational entities, [and] background entities." For many tasks the smallest ReALM model outperformed most of the competition and drew level with GPT-4; for "domain-specific queries," including those referring to smart home devices, it outperformed even GPT-4, despite being, the researchers claim, compact enough to run entirely on device rather than sending queries off to a remote data center for processing.

A preprint of the ReALM paper is available on Cornell's arXiv server.
