Distributed Systems, Operating Systems and Secure AI

Group leader: Alexander Tormasov, Professor of Computer Science
Persistent OS

This project investigates persistent computing approaches in the context of modern computer architecture. While persistent operating systems have existed for decades, the computing landscape has fundamentally changed — in terms of networking, storage, and performance requirements. The focus is on studying the advantages of persistent systems under modern architectural constraints, including fault tolerance, power efficiency for embedded and high-demand systems, and the development of algorithms that leverage persistence.

The project uses several existing systems as technological testbeds: in particular, PhantomOS, which the group has ported, and the Genode Framework, which runs on microkernel and hypervisor systems. Open-source development enables broad experimentation with new system-level concepts.

Skynet Project

The Skynet project investigates a novel paradigm for distributed computation across two complementary directions: (1) aggregation of underutilized resources from nearby devices, and (2) highly scalable, internet-scale heterogeneous systems. The goal is to build a universal virtual machine that is resilient to variability in the performance, availability, and network capabilities of individual nodes. This system is designed to support modern algorithms, including specialised forms of homomorphic encryption and integrated security mechanisms, enabling secure and fault-tolerant distributed computation.

Objectives:

  • Demonstrate aggregation of distributed compute resources
  • Enable execution of simple distributed operations as a foundation for more complex workloads
  • Develop a fault-tolerant distributed computation model suitable for operations such as matrix–vector multiplication
  • Implement security-aware execution supporting homomorphic encryption and other cryptographic algorithms within the distributed computation model
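The fault-tolerance objective can be illustrated with a small sketch. The setup below is hypothetical (the block partitioning, replication factor of two, and failure model are illustrative choices, not the project's actual design): the matrix is split into row blocks, each block is assigned to two replica workers, and the result is assembled from whichever replica is still alive.

```python
def split_rows(matrix, n_blocks):
    """Partition a matrix into contiguous row blocks."""
    size = (len(matrix) + n_blocks - 1) // n_blocks
    return [matrix[i:i + size] for i in range(0, len(matrix), size)]

def worker_matvec(block, vector):
    """Compute one row block of the product (would run on a single node)."""
    return [sum(a * x for a, x in zip(row, vector)) for row in block]

def distributed_matvec(matrix, vector, n_blocks=2, failed_workers=frozenset()):
    """Assemble the full product, skipping replicas that have failed.

    Each block b is replicated on workers (b, 0) and (b, 1); the partial
    result is taken from the first live replica, so any single worker per
    block may fail without losing the computation.
    """
    result = []
    for b, block in enumerate(split_rows(matrix, n_blocks)):
        partial = None
        for replica in range(2):
            if (b, replica) not in failed_workers:
                partial = worker_matvec(block, vector)
                break
        if partial is None:
            raise RuntimeError(f"all replicas of block {b} failed")
        result.extend(partial)
    return result

A = [[1, 2], [3, 4], [5, 6]]
x = [1, 1]
print(distributed_matvec(A, x))                                  # [3, 7, 11]
print(distributed_matvec(A, x, failed_workers={(0, 0), (1, 1)})) # [3, 7, 11]
```

The same answer is produced whether or not individual workers fail, which is the property the computation model above aims to preserve at scale.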


Multi-tenant LLM Deployment and Secure Inference

This project addresses efficient and secure deployment of large language models (LLMs) in multi-user environments.

Objectives:

  • Efficient LLM sharing
    • Multi-user deployment with minimal memory overhead
    • Comparison of fine-tuning strategies: full tuning, layer freezing, and LoRA
  • Encryption of LLM layers
    • Design of split-process architecture (secure local process + cloud process)
    • Evaluation of homomorphic encryption approaches (Paillier, CKKS, and other PHE/FHE algorithms)
  • Low-bit LLM architectures
    • Integration of BitNet-style quantization (ternary weights)
    • Analysis of encryption overhead and feasibility
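To make the Paillier evaluation concrete, the textbook scheme below demonstrates its additive homomorphism: multiplying two ciphertexts modulo n² yields an encryption of the sum of the plaintexts, which is what allows a cloud process to accumulate encrypted values without seeing them. The tiny fixed primes are for illustration only; a real deployment would use a vetted library and a 2048-bit modulus.

```python
import math
import random

def keygen(p=61, q=53):
    """Toy Paillier key generation (insecure demo primes)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard simple choice of generator
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    """Encrypt m < n with fresh randomness r coprime to n."""
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    """Recover m via the L-function: L(x) = (x - 1) // n."""
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
# Homomorphic addition: multiply ciphertexts, decrypt the product.
print(decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)))  # 42
```

Because Paillier supports only addition (and scalar multiplication) on ciphertexts, it is a partially homomorphic scheme; schemes such as CKKS trade much higher cost for approximate arithmetic over real-valued model weights.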
Group composition & projects/funding

The group consists of student-led research projects under the supervision of Prof. Dr. Alexander Tormasov, focusing on:

  • Phantom OS
  • Distributed computation
  • AI systems and secure model deployment

Projects are developed within the framework of Constructor Knowledge Labs.