Chat with Collections

To the chat

We love empowerment!

We wanted to explore the behaviour, risks and opportunities of large language models (LLMs). In this experiment, the GPT-3.5 language model was tested with the collection data.

The aim was to learn how we can move towards a culturally specific language model and how to evaluate its results against the collection data. We wanted to test how fine-tuning or training on qualitative collection data works, so we compared function calls and the different outputs produced by varying system prompts and user prompts. Here you can try it out yourself and talk to the collection.

Make use of both options: a system prompt sets the model's behaviour and constraints (e.g. "talk like an educator"; "only rely on the given object IDs"), while a user prompt carries your actual question (e.g. "What can the collection tell me about nature?").
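The division of labour between the two prompt types can be sketched in a few lines of Python. This is a minimal illustration, not the project's actual code: the model name `gpt-3.5-turbo` and the OpenAI client call are assumptions about a typical setup.

```python
# Sketch: how a system prompt (behaviour/constraints) and a user prompt
# (the actual question) are combined in a chat-completion request.
# The prompts below are the examples from the text, not the experiment's
# real configuration.

def build_messages(system_prompt, user_prompt):
    """Pair a system prompt with a user prompt in chat-message format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "Talk like an educator. Only rely on the given object IDs.",
    "What can the collection tell me about nature?",
)

# Sending the request would require the openai package and an API key,
# roughly like this (commented out, as it needs credentials):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-3.5-turbo", messages=messages
# )
# print(reply.choices[0].message.content)
```

The key point: the same user question can yield very different answers depending on the system prompt, which is exactly what the experiment lets you explore.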

(This experiment may only be available temporarily due to the cost of the language model.)


What is a prompt? (Wikipedia)

What is prompt design? (Anthropic)

Your new Superpower: Why is prompting important?