LLMs and Ontologies: The Future of AI
Artificial Intelligence, Large Language Model, Ontology, Semper-KI

Ontologies and Large Language Models can help businesses enhance their data management. In this interview, Dr. Sanju Tiwari explains how to use these technologies and how they will shape the future of AI engineering.

In the European Union, the member states endorsed a landmark regulation for Artificial Intelligence, the AI Act. From your point of view, what is Artificial Intelligence? Which technologies qualify as such?

Artificial intelligence (AI) is a technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. It encompasses technologies like Machine Learning, Deep Learning, Natural Language Processing (NLP), Computer Vision, Robotics, Expert Systems, Speech Recognition, Recommendation Systems, Autonomous Systems, and Cognitive Computing. A few examples of AI in the daily news and in our daily lives are digital assistants, GPS guidance, autonomous vehicles, and generative AI tools like OpenAI’s ChatGPT. These technologies involve the development of AI algorithms, modeled after the decision-making processes of the human brain, that can ‘learn’ from available data and make increasingly accurate classifications or predictions.

If we talk about AI, we basically talk about data. One technology for managing data is the ontology. Ontologies have been one of your main fields of research for many years. Why does the technology seem to be at the forefront of AI engineering just now?

Generally, ontologies are formal representations of knowledge within a specific domain. They consist of a set of concepts, categories, relationships, and rules that define how these concepts interact with each other. They generally provide a structured framework for organizing information and facilitate reasoning about the data.
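This idea of concepts, relationships, and rules can be sketched in plain Python, with an ontology represented as a set of subject–predicate–object statements. All concept and relation names below are illustrative, not taken from any real ontology:

```python
# An ontology sketched as subject-predicate-object triples.
# Concept and relation names are illustrative, not from a real ontology.
ontology = {
    ("Printer", "is_a", "Machine"),      # concept hierarchy
    ("Filament", "is_a", "Material"),
    ("Printer", "uses", "Filament"),     # relationship between concepts
}

def check_uses(triples):
    """A rule constraining how concepts interact:
    'uses' must link a Machine to a Material."""
    classes = {s: o for s, p, o in triples if p == "is_a"}
    return all(
        classes.get(s) == "Machine" and classes.get(o) == "Material"
        for s, p, o in triples
        if p == "uses"
    )

print(check_uses(ontology))  # True: the triples above satisfy the rule
```

Real ontology languages such as OWL express these rules declaratively rather than in code, but the structure is the same: a vocabulary of concepts plus constraints on how they may be combined.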

Ontologies are at the forefront of AI engineering now because they address pressing challenges related to semantic understanding, data integration, knowledge representation, and interdisciplinary applications. The advancements in technology and the increasing complexity of AI applications have highlighted the importance of ontologies in creating more intelligent, versatile, and effective AI systems. Several factors are involved in AI engineering, such as:

  • Semantic Understanding: Modern AI applications, such as natural language processing (NLP) and computer vision, require a deep understanding of the semantics behind the data they process. Ontologies offer a way to encode this semantic knowledge, allowing AI systems to interpret the context and meaning of information more accurately. This semantic layer is significant for tasks like question answering, information retrieval, and content recommendation.
  • Knowledge Representation and Reasoning: Ontologies facilitate the formal representation of knowledge within a particular domain. This enables AI systems to perform reasoning over this knowledge, making inferences, and generating new insights. This capability is essential for expert systems, decision support systems, and advanced AI applications that require a high level of understanding and reasoning.
  • Knowledge Sharing: In many industries, collaboration and knowledge sharing are key to innovation and problem-solving. Ontologies enable the creation of shared vocabularies and conceptual frameworks that can be used across different teams and organizations, fostering better collaboration and knowledge exchange.
  • Advances in Technology and Tools: There have been significant advances in the tools and technologies for creating, managing, and using ontologies. The development of standards like OWL (Web Ontology Language) and RDF (Resource Description Framework) has provided a robust foundation for ontology-based systems. Additionally, improved ontology editors and reasoners have made it easier for developers to implement and leverage ontologies in AI applications.
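The reasoning capability mentioned above can be illustrated with a minimal sketch: given an explicit class hierarchy, a system can infer types that were never stated directly. The class and instance names are made up for illustration:

```python
# Sketch of ontology reasoning: infer implicit types from an explicit hierarchy.
# Class and instance names are illustrative.
subclass_of = {
    "SLSPrinter": "Printer",
    "Printer": "Machine",
}
instance_types = {"printer_42": "SLSPrinter"}

def infer_types(instance):
    """Follow subClassOf links to derive every type of an instance."""
    types = []
    current = instance_types[instance]
    while current is not None:
        types.append(current)
        current = subclass_of.get(current)
    return types

print(infer_types("printer_42"))  # ['SLSPrinter', 'Printer', 'Machine']
```

Only the type `SLSPrinter` was asserted, yet the system can conclude that `printer_42` is also a `Printer` and a `Machine`. This is the kind of inference that OWL reasoners perform at scale.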

If I’m running a small or mid-sized business, what can ontologies help me do?

Ontologies can be extremely valuable for small and mid-sized businesses (SMBs) in various ways by improving data management, enhancing decision-making, and facilitating better communication.

For example, if a business runs different software systems (e.g. CRM, ERP, marketing automation), ontologies can help integrate data from these systems by providing a unified and consistent data model. This leads to more efficient data handling and reduces errors due to inconsistent data formats.
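Such an integration can be sketched as a mapping from source-specific field names to shared ontology properties. The field names, property names, and records below are hypothetical:

```python
# Sketch: integrate records from two systems via a shared ontology vocabulary.
# Field names, property names, and records are hypothetical.
crm_record = {"cust_name": "ACME GmbH", "cust_mail": "info@acme.example"}
erp_record = {"customer": "ACME GmbH", "open_orders": 3}

# Each source field is mapped to one ontology property.
mappings = {
    "crm": {"cust_name": "hasName", "cust_mail": "hasEmail"},
    "erp": {"customer": "hasName", "open_orders": "hasOpenOrderCount"},
}

def to_unified(source, record):
    """Rename source fields to the shared ontology properties."""
    return {mappings[source][field]: value for field, value in record.items()}

merged = {**to_unified("crm", crm_record), **to_unified("erp", erp_record)}
print(merged)
# {'hasName': 'ACME GmbH', 'hasEmail': 'info@acme.example', 'hasOpenOrderCount': 3}
```

Because both systems map `cust_name` and `customer` to the same property `hasName`, the two records can be merged without format conflicts.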

If your business maintains a large repository of documents, product information, or customer interactions, an ontology can improve search functionality by providing a structured way to understand the relationships between different concepts. It can also represent knowledge in a specific domain: in a healthcare-related business, for example, an ontology can model the relationships between symptoms, diagnoses, treatments, and outcomes, enabling more informed and accurate decisions and improving operational efficiency and customer satisfaction.
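One way an ontology improves search is query expansion: a search for a broad concept also matches documents mentioning narrower concepts. The mini-ontology and documents below are invented for illustration:

```python
# Sketch: ontology-aware search that expands a query to narrower concepts.
# The mini-ontology and documents are illustrative.
narrower = {
    "Treatment": ["Medication", "PhysicalTherapy"],
    "Medication": ["Antibiotic"],
}
documents = {
    "doc1": "Patient responded well to the antibiotic.",
    "doc2": "Physical therapy was scheduled twice a week.",
    "doc3": "Invoice for lab equipment.",
}

def expand(term):
    """A term plus all transitively narrower terms."""
    terms = [term]
    for child in narrower.get(term, []):
        terms += expand(child)
    return terms

def search(term):
    words = [t.lower() for t in expand(term)]
    # Naive match: does any expanded term occur in the (space-stripped) text?
    return sorted(
        doc for doc, text in documents.items()
        if any(w in text.lower().replace(" ", "") for w in words)
    )

print(search("Treatment"))  # ['doc1', 'doc2']
```

A plain keyword search for "Treatment" would find nothing here; with the ontology, the documents about antibiotics and physical therapy are correctly retrieved.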

Reusability is another important feature of ontologies: it enables knowledge sharing and collaboration within the organization.

In summary, ontologies can streamline operations, enhance data management, and improve decision-making processes. By providing a structured framework for organizing and utilizing knowledge, ontologies enable businesses to operate more efficiently, make better decisions, and ultimately, deliver better products and services to their customers.

In the research project Semper-KI, we aim to implement an independent platform to match 3D printing businesses and customers, focusing on dynamic criteria like quality, price, and context of usage. For this reason, we are crafting a dedicated 3D printing ontology. What are the criteria for developing a well-working ontology?

There are two main criteria. First of all, the ontology should follow a prescribed methodology such as NeOn or METHONTOLOGY. Secondly, it should follow the FAIR data principles: it should be findable, accessible, interoperable, and reusable.

Other than that, you should follow certain steps in developing an ontology:

  • Define the scope and purpose (requirement specification).
  • Conceptualization: identify the concepts and relationships of the domain.
  • Formalization: translate the conceptual model into a formal representation; define classes, properties, and instances.
  • Implementation: model the ontology formally using standard ontology languages like OWL (Web Ontology Language) and RDF (Resource Description Framework).
  • Evaluation: check the modeled data for accuracy and correctness.
  • Publication: structure the ontology in a modular way to facilitate reuse and scalability.
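The evaluation step is often carried out with competency questions: queries the finished ontology must be able to answer. A minimal sketch, with illustrative triples and questions:

```python
# Sketch of the evaluation step: check an ontology against competency questions.
# Triples and questions are illustrative.
triples = {
    ("SLSPrinter", "subClassOf", "Printer"),
    ("SLSPrinter", "usesMaterial", "NylonPowder"),
}

def can_answer(subject, predicate):
    """A competency question is answerable if a matching triple exists."""
    return any(s == subject and p == predicate for s, p, o in triples)

# Competency questions phrased as (subject, predicate) lookups:
questions = [("SLSPrinter", "usesMaterial"), ("SLSPrinter", "hasBuildVolume")]
for subject, predicate in questions:
    status = "answerable" if can_answer(subject, predicate) else "gap"
    print(subject, predicate, "->", status)
```

An unanswerable question signals a gap in the model, sending the engineer back to the conceptualization step before publication.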

Ontologies are intertwined with so-called knowledge graphs. What is their impact on AI engineering?

The term knowledge graph was introduced by Google in 2012, although the idea was not new. There are two important terms: TBox and ABox. TBox stands for Terminology Box and describes the schema; the instance level is called the ABox (Assertion Box). The schema alone is the ontology. When the instances of an ontology are added and published, the result is called a knowledge graph. In short, the knowledge graph is an expansion of the ontology.
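The TBox/ABox split can be made concrete with two small sets of triples (all names illustrative):

```python
# Sketch of the TBox/ABox split: schema triples vs. instance triples.
# All names are illustrative.
tbox = {  # terminology: classes and how they relate (the ontology)
    ("Printer", "subClassOf", "Machine"),
    ("usesMaterial", "domain", "Printer"),
}
abox = {  # assertions: concrete individuals
    ("printer_42", "type", "Printer"),
    ("printer_42", "usesMaterial", "pla_filament"),
}

# Publishing the schema (TBox) together with its instances (ABox)
# yields the knowledge graph.
knowledge_graph = tbox | abox
print(len(knowledge_graph))  # 4 triples
```

The TBox talks about classes like `Printer`; the ABox talks about individuals like `printer_42`. The union of the two is the knowledge graph.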

Another rising star of AI technologies are Large Language Models. Currently, the most popular model is ChatGPT. How are ontologies and Large Language Models connected?

Ontologies and LLMs are connected through their complementary strengths: ontologies provide structured, explicit knowledge representation, while LLMs offer powerful language understanding and generation capabilities. By integrating these two technologies, we can create more intelligent, context-aware, and capable AI systems that leverage both explicit domain knowledge and the broad language understanding of LLMs.

In some of our research we found that the capabilities of LLMs vary regarding their assistance in ontology engineering. When using LLMs like ChatGPT, it can become a problem that I’m dependent, for example, on how they’re trained. How can I make sure that the LLMs I’m using are reliable for the ontology engineering I want to do?

To ensure the reliability of LLMs for ontology engineering, you should fine-tune the models using domain-specific datasets and validate their outputs through rigorous testing and expert review. Employing a human-in-the-loop approach allows domain experts to correct and validate the LLM’s contributions, ensuring alignment with established ontological structures and semantic rules.

Additionally, leveraging existing ontologies for cross-referencing, utilizing ontology management tools for consistency checks, and mitigating biases by ensuring diverse and representative training data will help maintain the model’s accuracy and relevance.
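Such a consistency check can be sketched as a filter that flags LLM-suggested triples violating the ontology's declared constraints, before a human expert reviews them. The property constraints, class assignments, and example output are hypothetical:

```python
# Sketch of a consistency check for LLM-suggested triples, run before
# human expert review. Constraints and class assignments are illustrative.
constraints = {  # property -> (expected subject class, expected object class)
    "usesMaterial": ("Printer", "Material"),
}
known_classes = {"printer_42": "Printer", "pla": "Material", "order_7": "Order"}

def violations(suggested_triples):
    """Flag triples whose subject/object classes break the declared constraints."""
    bad = []
    for s, p, o in suggested_triples:
        if p in constraints:
            dom, rng = constraints[p]
            if known_classes.get(s) != dom or known_classes.get(o) != rng:
                bad.append((s, p, o))
    return bad

llm_output = [
    ("printer_42", "usesMaterial", "pla"),  # consistent with the constraints
    ("order_7", "usesMaterial", "pla"),     # subject is not a Printer
]
print(violations(llm_output))  # [('order_7', 'usesMaterial', 'pla')]
```

Triples that pass the automated check still go to the domain expert; the point of the filter is to make the human-in-the-loop review cheaper by catching obvious semantic errors first.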

Regular updates and iterative validation processes further ensure that the LLM remains a reliable tool for your ontology engineering tasks.

In the long run: Do you think LLMs will make ontologies obsolete?

LLMs and ontologies serve different primary functions, but their strengths can complement each other in many applications. Rather than seeing LLMs as a replacement for ontologies, it is more productive to view them as tools that can work together to create more powerful and versatile AI systems. Ontologies provide the structured knowledge necessary for logical reasoning and interoperability, while LLMs offer flexible and context-aware language understanding.

The future of AI is likely to see increased integration of these technologies, leveraging the best of both worlds to achieve more sophisticated and effective solutions.

Sanju, thank you for the interview.

Dr. Sanju Tiwari holds a Doctor of Philosophy and is a professor and senior researcher at various institutions, including the Institute of Computer Applications and Management in New Delhi. She visited the Institut für Angewandte Informatik (InfAI) e.V. in Leipzig due to her contribution of a chapter in Hindi for the research project DBpedia.