
Quantum Machine Learning

In our team at LILY, we have been grappling with a big question for some time: How can we harness the possibilities of quantum computing to build more intelligent, adaptable, and creative AI systems?

For this purpose, we develop our own quantum models (Quantum Machine Learning, or QML) – AI models whose computations are based on the principles of quantum physics rather than classical logic. Because quantum computers are not yet widely available, we initially simulate many of these processes on classical hardware. The goal is to lay the foundation now for what will soon be possible on real quantum computers.
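To make the idea of classical simulation concrete, here is a minimal toy illustration (not LILY code): a single qubit is just a complex 2-vector on a classical machine, and a quantum gate is a unitary matrix applied to it.

```python
import numpy as np

# Toy sketch: simulating one qubit classically, as one does before
# real quantum hardware is available. The state |0> is a complex
# 2-vector; the Hadamard gate puts it into an equal superposition.

ket0 = np.array([1.0, 0.0], dtype=complex)  # |0>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

state = H @ ket0
probabilities = np.abs(state) ** 2  # Born rule: measurement probabilities

print(probabilities)  # ~[0.5, 0.5]
```

Larger circuits work the same way, with 2^n-dimensional state vectors – which is exactly why classical simulation only carries so far and real quantum hardware eventually matters.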

What makes our models special?

Instead of just training one large AI model, we develop an entire family of specialized QML models that take on different tasks – from language processing to drug simulation.

These models share a common foundation: high-dimensional vectors – representations in extremely large number spaces in which complex meanings and relationships can be encoded. In classical AI, such vectors have long been standard; we are now bringing them into the quantum world.

Examples from our model family

  • LLY-DML: This model specifically assigns data to certain quantum states. This allows information to be represented extremely flexibly – ideal for dynamic systems with many possible states.
  • LLY-GP: Here, language is mapped quantum mechanically. Individual words are represented as quantum states – so ambiguous or context-dependent terms can be better processed.
  • LLY-GLLM: This model understands relationships between concepts. It can learn, for example, that "King" − "Man" + "Woman" should lead to "Queen" – and transfers such logical analogies to new situations.
  • LLY-LLM: A generative language model that, like classical Large Language Models (e.g., GPT), predicts the next term – just in a quantum-like vector space.
  • LLY-HDC: Our model for Hyperdimensional Computing. Here, data is processed in a huge vector space, where not a single state counts, but the interaction of many weak signals. It is particularly well suited for semantic processing, pattern recognition, and symbolic logic.
  • LLY-NN: A model modeled on biological brains. It replicates neural connections, recognizes patterns, and makes decisions – but on a deeper, structurally inspired level.
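The analogy learning described for LLY-GLLM can be illustrated with plain vector arithmetic. The embeddings below are hypothetical toy values, not the actual model's vectors; they only show the mechanism: subtracting and adding word vectors, then looking up the nearest neighbour.

```python
import numpy as np

# Hypothetical toy embeddings (not LLY-GLLM's real vectors) showing the
# classic analogy arithmetic: king - man + woman ≈ queen.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "queen": np.array([0.1, 0.8, 0.9]),
}

query = emb["king"] - emb["man"] + emb["woman"]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Nearest neighbour (excluding the words used in the query) is the answer.
best = max((w for w in emb if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(query, emb[w]))
print(best)  # "queen"
```

In a real model the vectors have hundreds of dimensions and are learned from data, but the transfer of an analogy to new word pairs works by exactly this arithmetic.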
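The core idea behind LLY-HDC – that "not a single state counts, but the interaction of many weak signals" – can be sketched with standard hyperdimensional computing operations. The symbols and roles below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervectors are typically ~10,000-dimensional

# Random bipolar hypervectors for symbols (hypothetical example, not LLY-HDC).
def hv():
    return rng.choice([-1, 1], size=D)

color, shape = hv(), hv()   # role vectors
red, circle = hv(), hv()    # filler vectors

# Binding (elementwise product) pairs a role with a filler;
# bundling (elementwise sum) superposes several bindings into one record.
record = color * red + shape * circle

# Unbinding with the role vector yields a noisy filler; similarity against
# a codebook recovers it despite the superposed "weak signals".
noisy = record * color
sims = {name: (noisy @ v) / D for name, v in
        {"red": red, "circle": circle}.items()}
print(max(sims, key=sims.get))  # "red"
```

Because each individual dimension carries almost no information, the representation is robust to noise – which is what makes it attractive for semantic processing and symbolic logic.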

All models are openly available on GitHub – we explicitly invite experimentation, further development, or use for your own projects.

What does a practical application look like?

A particularly exciting example is our work in drug design and protein research. Here we build a simulation in which molecular properties are represented as high-dimensional vectors. One vector represents a drug, another a target protein.

These vectors are combined – and from the resulting pattern, the system can predict how well the drug works. If the result doesn't fit, individual properties of the molecule are changed – always within the bounds of chemical stability. This process runs automatically and can test a great many variants in a short time.
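The loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in for the LILY pipeline: the "fit" score is a simple cosine similarity, and a "property change" is a small perturbation of one vector coordinate, kept only if the predicted fit improves.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 256  # dimensionality of the toy property vectors (assumption)

target = rng.normal(size=D)  # stand-in vector for the target protein
drug = rng.normal(size=D)    # stand-in vector for the drug candidate

def fit(candidate, target):
    # Toy affinity score: cosine similarity of the two vectors.
    return candidate @ target / (np.linalg.norm(candidate) * np.linalg.norm(target))

best, best_score = drug, fit(drug, target)
for _ in range(2000):
    candidate = best.copy()
    idx = rng.integers(D)           # change one "property" at a time
    candidate[idx] += rng.normal()  # small, local modification
    score = fit(candidate, target)
    if score > best_score:          # keep only improving variants
        best, best_score = candidate, score

print(round(best_score, 2))
```

A real system would replace the similarity score with a learned predictor and constrain the perturbations to chemically stable molecules, but the automatic propose-score-keep loop is the same.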

This creates an intelligent, learning approach to developing new medicines – not through trial and error, but through vector-based understanding and targeted permutation.

How do we work with creativity and self-reflection?

Artificial intelligence can do more than just recognize patterns. It can also be creative – and we try to model that as well.

In the LILY Framework, we have built in a mechanism that we call "HDC Reflection." Here, the system checks after each processing step: Was the answer meaningful? If not, parts of the vector space are changed – through targeted random influences or combinatorial changes. This creates a process that combines reflection, variation, and creative innovation – similar to our thinking.
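A minimal sketch of this reflect-vary-keep cycle, with all names and the "meaningfulness" criterion invented for illustration: the system scores its own answer, and while the score is too low it applies a combinatorial change to part of the hypervector, keeping the variant only if the answer became more meaningful.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000

goal = rng.choice([-1, 1], size=D)    # stands in for a "meaningful" answer
answer = rng.choice([-1, 1], size=D)  # the system's initial answer
start = answer @ goal

for _ in range(500):
    # Reflection: was the answer meaningful enough? (threshold is invented)
    if answer @ goal / D >= 0.2:
        break
    # Variation: flip a small random slice of the vector space.
    variant = answer.copy()
    variant[rng.integers(0, D, size=20)] *= -1
    # Keep the change only if it moved the answer closer to the goal.
    if variant @ goal > answer @ goal:
        answer = variant

print(round(answer @ goal / D, 3))
```

The ratchet structure – random variation, evaluated and selectively kept – is what gives the process its mix of reflection and creative exploration.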

Additionally, we also simulate influences that arise in real brains through hormones: for example, that certain concepts or contexts change the "mindset" of the system. We call this hormonal modulation – a mechanism that makes reactions more context-dependent and opens up new possibilities of expression.
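One way such hormonal modulation can be sketched (hormone names, update rule, and gain values are all invented for illustration): a small global state, slowly updated by context, rescales how the system responds to the very same stimulus.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 64

# Global modulatory state (hypothetical "hormones").
hormones = {"arousal": 0.0, "novelty": 0.0}

def observe(context_is_new, context_is_urgent, decay=0.8):
    # Context nudges the hormone levels; decay lets them fade over time.
    hormones["novelty"] = decay * hormones["novelty"] + (1.0 if context_is_new else 0.0)
    hormones["arousal"] = decay * hormones["arousal"] + (1.0 if context_is_urgent else 0.0)

def respond(stimulus):
    # Modulation: hormone levels gate how strongly (and how exploratively)
    # the system reacts to the same stimulus vector.
    gain = 1.0 + hormones["arousal"]
    noise = hormones["novelty"] * 0.1 * rng.normal(size=D)
    return gain * stimulus + noise

stimulus = rng.normal(size=D)
calm = respond(stimulus)
observe(context_is_new=True, context_is_urgent=True)
excited = respond(stimulus)

print(np.linalg.norm(calm) < np.linalg.norm(excited))  # stronger when "aroused"
```

The point is not the specific formula but the architecture: a slow, context-driven state that makes the same input produce different reactions – more context-dependence, more expressive range.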

What's the point of all this?

Our research lays the foundation for a future in which quantum computing and AI merge. LILY is intended to be a platform for developing today the models that will be commonplace in tomorrow's world.

Whether language, science, medicine, or creative thinking – we want to show that AI can not only calculate faster, but also understand, connect, and create more deeply.

And for that, we must start thinking in new ways today.

View on GitHub


https://github.com/LILY-QML