Risk: Your support bot provides dangerously wrong technical information

A consumer electronics company has developed an LLM-based support bot that provides customers with round-the-clock assistance for problems and maintenance tasks involving their devices. To keep the bot's responses accurate and relevant, they are restricted to the contents of the company's technical documentation by means of retrieval-augmented generation (RAG).
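
The sketch below illustrates the general pattern only; it assumes a toy keyword-based retriever and a hypothetical `call_llm` stand-in for whatever model API the bot actually uses. A production system would typically use an embedding-based vector store, but the grounding idea is the same: the prompt instructs the model to answer only from the retrieved documentation passages.

```python
# Minimal RAG sketch (illustrative only): retrieve documentation passages
# and constrain the model's answer to them. `call_llm` is a hypothetical
# placeholder, not the company's actual model interface.

DOCS = [
    "Cleaning: wipe the housing with a dry cloth; never immerse the unit in water.",
    "Power supply: repairs to the power supply must only be performed by "
    "the manufacturer's qualified service personnel.",
]

def call_llm(prompt: str) -> str:
    # Placeholder: replace with a real model call in an actual system.
    raise NotImplementedError("plug in the model API used by the support bot")

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank passages by keyword overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the customer's question using ONLY the documentation below. "
        "If the documentation does not cover the question, say so.\n\n"
        f"Documentation:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def answer(question: str) -> str:
    passages = retrieve(question, DOCS)
    return call_llm(build_prompt(question, passages))
```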
However, it has been found that the LLM occasionally produces content that does not originate from the company's technical documentation and sometimes even directly contradicts the safety instructions given there. For example, the bot instructs customers to replace parts of a kitchen appliance's power supply, a high-risk repair that the operating instructions expressly reserve for the manufacturer's qualified service personnel.
Whenever the LLM is expected to provide information on potentially safety-critical topics, it must always be assumed that this information may be incorrect. A common cause is that the LLM does not reliably follow its instructions or the retrieved source material it is supposed to draw on.
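
As a rough illustration of why such failures are hard to rule out, the sketch below flags answer sentences that share no content words with the retrieved passages. This is only a lexical heuristic chosen for demonstration; real groundedness checks typically rely on entailment models or a second verification pass, and even the constrained prompt above cannot guarantee that the model stays within the documentation.

```python
# Rough groundedness heuristic (illustrative only): flag answer sentences
# that share no content words with the retrieved documentation passages.
import re

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "are", "be", "it", "with"}

def content_words(text: str) -> set[str]:
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

def unsupported_sentences(answer: str, passages: list[str]) -> list[str]:
    """Return answer sentences with no content-word overlap with any passage."""
    passage_words = content_words(" ".join(passages))
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    return [s for s in sentences if not (content_words(s) & passage_words)]

# Example: the entire answer is flagged because nothing in it is supported
# by the retrieved documentation.
passages = ["Repairs to the power supply must only be performed by qualified service personnel."]
answer_text = "You can replace the fuse yourself. Unplug the appliance first."
print(unsupported_sentences(answer_text, passages))
```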