LLMs

Raphaël Millet

BlindChat - Our Confidential AI Assistant

Introducing BlindChat, a confidential AI assistant prioritizing user privacy through secure enclaves. Learn how it addresses data security concerns in AI applications.

Daniel Huynh

Privacy Risks of LLM Fine-Tuning

This article explores privacy risks in using large language models (LLMs) for AI applications. It focuses on the dangers of data exposure to third-party providers during fine-tuning and the potential disclosure of private information through LLM responses.

Daniel Huynh

Introducing BlindChat Local: Full In-Browser Confidential AI Assistant

Discover BlindChat, an open-source privacy-focused conversational AI that runs in your web browser, safeguarding your data while offering a seamless AI experience. Explore how it empowers users to enjoy both privacy and convenience in this transformative AI solution.

Daniel Huynh

Introducing BlindLlama, Zero-Trust AI APIs With Privacy Guarantees & Traceability

Introducing BlindLlama: An open-source Zero-trust AI API. Learn how BlindLlama ensures confidentiality and transparency in AI deployment.

Daniel Huynh

The Enterprise Guide to Adopting GenAI: Use Cases, Tools, and Limitations

Generative AI is revolutionizing enterprises with enhanced efficiency and customer satisfaction. The article explores real-world applications and deployment options like SaaS, on-VPC commercial FMs, and on-VPC open-source FMs, emphasizing the need for data protection.

Daniel Huynh

Attacks on AI Models: Prompt Injection vs. Supply Chain Poisoning

A comparison of prompt injection and supply chain poisoning attacks on AI models, illustrated with a bank assistant example. Prompt injection is limited in impact to individual sessions, while supply chain poisoning compromises the model itself, affecting every downstream user and posing far more severe risks.

Daniel Huynh

Open Source Is Crucial for AI Transparency but Needs More Tooling

AI model traceability is crucial, but open-source practices alone are inadequate. Combining new software and hardware-based tools with open sourcing offers potential solutions for a secure AI supply chain.

Daniel Huynh

AI-Assisted Code Generation With Privacy Guarantees: Securely Deploy SantaCoder With BlindBox on Azure

In this article, we'll demonstrate how you can efficiently analyze code at scale while maintaining privacy. We'll use BlindBox, our open-source secure enclave tooling, to serve SantaCoder with privacy guarantees on Azure.

Raphaël Millet

Mithril X Tramscribe: Confidential LLMs for Medical Voice Notes Analysis

How we partnered with Tramscribe to leverage LLMs for medical voice note analysis.

Daniel Huynh

Mithril Security Raised €1.2 Million to Protect LLM Users' Data

With BlindBox, you can use Large Language Models without any intermediary or model owner seeing the data sent to the models. This type of solution is critical today, as the newfound ease-of-use of generative AI (GPT4, MidJourney, GitHub Copilot…) is already revolutionizing the tech industry.

Daniel Huynh

Announcing BlindBox, Secure Infrastructure Tooling to Deploy LLMs, Available on Confidential Containers on Azure Container Instances

We are excited to introduce BlindBox, our latest open-source solution designed to enhance SaaS deployment security. Our tooling enables developers to wrap any Docker image with isolation layers and deploy them inside Confidential Containers.