
Bring safe generative AI to every client, without adding chaos to your stack.

One Secure AI Workspace and Enterprise Search platform, wrapped in an MSP-friendly program you can run as Safe AI and Managed GenAI services.

Your clients’ users are already using AI. They paste sensitive data into tools you do not control, with no policies, no logging, and no real way to answer “how are we using AI?” when owners or auditors ask.

Liminal, delivered through PORT1, gives your MSP a Secure AI Workspace and Enterprise Search layer you can run as a managed service for your clients. We call this Secure AI Workspace-as-a-Service. PORT1 packages it so you can sell, deploy, and support it the way MSPs actually work. 

Why MSPs Need a Safer Path to Generative AI 

Shadow AI is already in your client base 

Users are already turning to tools like ChatGPT, Claude, Perplexity, and others to do their jobs more efficiently. They may be copying client data, PHI, financial records, and internal documents into tools that sit completely outside your security stack.

Traditional security tools do not see AI usage

EDR, email security, web filters, and firewalls were not built to understand prompts sent to AI models or the responses that come back. They cannot reliably control what goes into a model or what comes out.

Leaders are stuck between “no AI” and “hope for the best”

Banning AI drives it underground. Allowing anything creates real exposure. Liminal creates a third option: a safe, governed AI environment your MSP can run for them.

 

Your clients will use GenAI with or without you

A Secure AI Workspace with Liminal lets you get in front of that demand, offer a better GenAI experience, protect client data, and create a managed service that keeps you at the center of your clients’ AI journey instead of watching them go around you.

What Liminal gives you and your clients 

Liminal is an AI security, governance, and enablement platform. 

It sits between: 

  • Users in your MSP and at your clients 
  • Multiple AI models from different providers 
  • Internal data sources and SaaS apps 

Its core promise is simple: enable generative AI everywhere work happens while protecting sensitive data, enforcing policies, and giving you real visibility. On top of Liminal, you deliver a Secure AI Workspace-as-a-Service: one governed AI workspace per client where users can work with leading models safely, and where you are in control. 


Key Capabilities

Secure multi-model chat 

  • A single chat workspace where users can securely access leading models from providers like OpenAI, Anthropic, Perplexity, Google, and IBM WatsonX 
  • Standard interface across many models 
  • Policy controls on who can use which model and what data is allowed (see the sketch after this list)
  • Detection and protection for sensitive data before it ever reaches a model 
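
For illustration only, here is a minimal sketch of what "who can use which model, with what data" can look like once it is written down as policy. The structure and names (POLICIES, model_allowed, the model identifiers) are hypothetical, not Liminal's actual configuration format; in practice you manage these controls in the Liminal Console.

```python
# Hypothetical sketch of per-group AI policy, NOT Liminal's real configuration format.
# Real policies are defined in the Liminal Console; this only illustrates the idea of
# controlling who can use which model and how each type of sensitive data is handled.

POLICIES = {
    "finance-team": {
        "allowed_models": ["gpt-4o", "claude-sonnet"],    # models this group may use
        "data_rules": {"PII": "mask", "PCI": "redact"},   # action per sensitive data type
    },
    "engineering": {
        "allowed_models": ["gpt-4o", "claude-sonnet", "perplexity-sonar"],
        "data_rules": {"PII": "mask", "SOURCE_CODE": "allow"},
    },
}

def model_allowed(group: str, model: str) -> bool:
    """Return True if the group's policy permits the requested model."""
    policy = POLICIES.get(group)
    return bool(policy) and model in policy["allowed_models"]

print(model_allowed("finance-team", "perplexity-sonar"))  # False: not on the allow list
print(model_allowed("engineering", "gpt-4o"))             # True
```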

Enterprise Search with Spaces

  • Governed Retrieval Augmented Generation over internal content, organized into Spaces that follow permissions and policies 
  • Connect to Microsoft 365, Google Workspace, file shares, and other repositories 
  • Respect existing access controls when answering questions (see the sketch after this list)
  • Turn scattered documentation into a living knowledge assistant for users 
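
As a rough illustration of "respect existing access controls," the sketch below only lets documents a user can already read become context for an AI answer. The data structures and function names are hypothetical and use a simplified keyword match; Liminal's connectors and Spaces handle this inside the platform rather than in code you write.

```python
# Hypothetical sketch of permission-aware retrieval, not Liminal's implementation.
# The idea: only documents the asking user can already read are eligible to become
# context for an AI answer.

from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    content: str
    allowed_groups: set = field(default_factory=set)  # groups that can read this doc

def retrieve_for_user(query: str, user_groups: set, corpus: list) -> list:
    """Return candidate documents the user is allowed to see.
    A real system ranks by relevance; this sketch just matches a keyword."""
    readable = [d for d in corpus if d.allowed_groups & user_groups]
    return [d for d in readable if query.lower() in d.content.lower()]

corpus = [
    Document("Payroll SOP", "How payroll is processed each month", {"finance-team"}),
    Document("Onboarding guide", "How to onboard a new employee", {"finance-team", "everyone"}),
]

# A user outside the finance team never sees the payroll SOP as AI context.
print([d.title for d in retrieve_for_user("payroll", {"everyone"}, corpus)])      # []
print([d.title for d in retrieve_for_user("payroll", {"finance-team"}, corpus)])  # ['Payroll SOP']
```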

Liminal Console and insights

  • Central console for AI security, administration, and observability 
  • Fine-grained policies by user, team, model, and data type
  • Real-time logs and alerts on AI usage
  • Insights on adoption, impact, and high value use cases 

Desktop app, browser extension, and SDK

  • Bring secure AI into the browser and desktop where people already work 
  • Build or wrap internal apps with Liminal’s SDK so AI features are governed, logged, and consistent across clients 

Custom assistants for more tailored outputs

  • Build and share secure, custom assistants that can use different underlying models 
  • Attach supporting documents so each assistant is tuned to a specific client, department, or use case 

Why Liminal with PORT1 is built for MSPs

Liminal is enterprise-grade. PORT1 makes it MSP-friendly.

With Liminal for MSPs, you get: 

  • A monthly per-user licensing model with a 10-user minimum per tenant
  • Clear tenant patterns that can support both your own staff and client environments 
  • Service play blueprints you can turn into recurring revenue, such as Secure AI Workspace-as-a-Service, AI Governance and Shadow AI Assessment, and Enterprise Search Enablement 
  • Enablement and support from a team that understands how MSPs operate, not just how the software works 
  • The ability to white label the Secure AI Workspace with your MSP brand or your client’s brand 

What this means for you: 

  • Turn AI risk into a differentiated managed service instead of a one-time project
  • Reduce your own exposure when clients use AI 
  • Say Yes to GenAI in a way that is safe, structured, and aligned with your managed services model 

Core service offers you can build on Liminal

These are the offers most MSPs start with. You can adapt the naming and packaging to your own stack.

Secure AI Workspace-as-a-Service – everyday Safe AI for users

Give each client a safe, governed AI workspace their users can use every day. 

Ideal clients: 

  • SMB and lower mid market organizations with knowledge workers 
  • Already experimenting with AI or considering it 
  • Concerned about data leakage, compliance, or reputation 

What you deliver: 

  • Liminal tenant setup and configuration 
  • Baseline policies for PII, PHI, financial data, and internal-only content
  • Role-based access control mapped to their org structure
  • Secure multi-model chat for users
  • Ongoing policy tuning, usage reviews, and guidance on Safe AI usage 

How to position it: 

  • “A secure AI workspace your users can use every day.” 
  • “One place to use AI, with guardrails and visibility for leadership.” 

AI Governance and Shadow AI Assessment – understand and shape current AI usage  

Help clients understand how AI is used today, reduce Shadow AI, and move usage into Liminal as a managed service. 
 

What you deliver:

  • Stakeholder interviews and policy review 
  • Short assessment report that covers: 
    • Current state of AI usage 
    • Key risks and gaps 
    • Recommended Liminal policies and controls 
  • Roadmap to roll out Liminal as the governed AI platform 

How it ties into managed services: 

  • Liminal becomes the safe destination for AI 
  • You turn findings into ongoing Secure AI Workspace-as-a-Service and periodic governance reviews 

Enterprise Search Enablement – AI that understands client data  

Turn internal documentation and systems into a secure, AI-driven knowledge layer.

When to position it: 

  • After the client sees value from core Liminal usage 
  • When they say “we want AI that understands our own data” 
  • When users waste time hunting for information across systems 

What you deliver: 

  • Identify and prioritize data sources 
  • Configure connectors and test access controls 
  • Design Spaces per team, department, or use case 
  • Train users to ask questions safely against internal content 

Value for clients:

  • Less time wasted looking for documents 
  • Policies and permissions still enforced when AI answers questions 
  • AI becomes a safe way to unlock the value of existing documentation 

How MSPs can use Liminal internally 

Consider using Liminal inside your own MSP as one of the first proof-of-value environments, alongside an early adopter client. This builds internal knowledge and real examples without slowing down sales opportunities.

Examples: 

  • Engineers: summarize tickets and logs, draft troubleshooting steps, create SOPs 
  • Sales, account managers, and vCIO roles: draft proposals, SOWs, and QBRs, summarize meetings, write follow ups 
  • Leadership and operations: draft policies, summarize reports, support planning and internal communication 

All of it happens inside a secure workspace instead of random AI tools. That reduces your own risk and gives you live stories to share with clients. 


How Liminal helps your MSP

For MSP owners and principals: 

  • New and defensible recurring services around “Safe AI” and “Managed GenAI” 
  • Clear differentiation in crowded markets 
  • Reduced liability when clients use AI, because there is a structured, auditable layer in place 

For MSP technical and security leads: 

  • A policy engine and visibility layer for AI usage across multiple clients 
  • Strong RBAC and tenant boundaries that make sense in MSP environments 
  • Logs and observability tied to real-world prompts and responses, not just web URLs

For MSP sales and account managers: 

  • Simple narratives and talk tracks you can repeat in every QBR 
  • Concrete offers instead of vague “we do AI too” claims 
  • Stories that resonate with non-technical SMB leaders: productivity, safety, and support for compliance efforts

Liminal’s security model centers on three steps. 

Detect 

  • Identify sensitive data in prompts before it is sent to any model 
  • Detect PII, PHI, PCI, compliance-defined terms, and data unique to each client 

Protect  

  • Apply policies to detected data: mask, redact, warn, or allow 
  • Policies can differ by user, group, tenant, and destination model 

Rehydrate 

  • When responses return, Liminal rehydrates protected terms for authorized users 
  • Users see full context, while models and logs respect the policies you set 

This approach helps reduce the risk of data leakage, supports compliance strategies, and gives you a strong story with auditors and owners, without promising automatic compliance or absolute security. 
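
To make the three steps concrete, here is a minimal sketch of a detect, protect, and rehydrate flow for a single data type. The regex, placeholder format, and function names are illustrative assumptions, not Liminal's actual detection engine or API.

```python
# Illustrative sketch of a detect -> protect -> rehydrate flow for one data type.
# This is NOT Liminal's engine or API; it only shows the shape of the idea.

import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy detector for a single PII pattern

def protect(prompt: str):
    """Detect sensitive values and swap them for placeholders before the prompt
    leaves your environment. Returns the safe prompt plus the mapping used later
    to rehydrate the response for authorized users."""
    mapping = {}

    def _replace(match):
        token = f"<PII_{len(mapping) + 1}>"
        mapping[token] = match.group(0)
        return token

    return SSN_PATTERN.sub(_replace, prompt), mapping

def rehydrate(response: str, mapping: dict) -> str:
    """Restore protected values in the model's response for an authorized user."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

safe_prompt, mapping = protect("Summarize the case for client SSN 123-45-6789.")
print(safe_prompt)  # the model only ever sees "<PII_1>", never the real number
model_response = "Case summary for <PII_1>: ..."
print(rehydrate(model_response, mapping))  # the authorized user sees the real value again
```

The point of rehydration is that authorized users keep full context in the response, even though the model itself never received the protected values.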

Liminal also reports third-party audits and attestations that are important for regulated SMBs, such as HIPAA and SOC 2, with additional frameworks in progress.


Featured white paper: The Practical Path into AI for Managed Service Providers

The Practical Path into AI for Managed Service Providers: Turning Secure AI Workspaces into Revenue 

A white paper from PORT1’s founder that explains why AI is already inside your clients’ businesses, why Secure AI Workspaces are the most natural AI move for Managed Service Providers and Managed Security Service Providers, and how you can turn Liminal into a recurring, differentiated service instead of a one-off project.

Vertical Examples You Can Lead With 

Liminal is designed for sensitive and regulated environments that still need AI-driven productivity.

Healthcare and clinics 

  • Draft clinical documentation and patient communication inside a governed AI workspace 
  • Help avoid PHI leakage into unmanaged tools 
  • Support HIPAA-driven expectations around access and logging 

Financial services, banking, and credit unions  

  • Summarize policies, procedures, and regulatory guidance safely 
  • Use Enterprise Search over internal docs while respecting permissions 
  • Help reduce the risk of exposing financial or payment data to unmanaged AI tools 

Insurance agencies and brokers 

  • Draft coverage summaries, client communication, and internal training content 
  • Keep customer and claims data within a governed AI environment 

Education 

  • Help educators draft materials and summarize content without exposing student data 
  • Support safe use of AI for staff while keeping student and family information protected 

Biotech and life sciences 

  • Help researchers work with complex documentation and knowledge bases 
  • Protect proprietary IP and research data when using AI for summarization and exploration 

These are all places where your MSP is already trusted. Liminal lets you extend that trust into AI. 

Do clients still need separate tools like ChatGPT?

No. Your clients do not need a dozen different AI tools for everyday work. 

How model access works conceptually (a minimal sketch follows the list):

  • Users access AI through Liminal, not through many separate sites 
  • Liminal connects to multiple models from different providers on the backend 
  • Your MSP and clients get one governed interface with model optionality, instead of a sprawl of unmanaged AI accounts 
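
The sketch below shows that routing idea in miniature: one governed entry point in front of several providers, with a policy check and an audit log at a single choke point. The provider names and the send_prompt function are hypothetical stand-ins, not Liminal code.

```python
# Hypothetical sketch of one governed entry point in front of multiple providers.
# Not Liminal code; it only illustrates "one interface, many models" with a
# policy check and an audit log at a single choke point.

import datetime

AUDIT_LOG = []  # Liminal logs usage centrally; a plain list stands in here

def send_prompt(user: str, model: str, prompt: str, allowed_models: set) -> str:
    """Route a prompt to the requested provider if policy allows, and log the request."""
    if model not in allowed_models:
        raise PermissionError(f"{model} is not permitted for {user}")
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "model": model,
    })
    # A real backend would call the provider's API here; we return a stub reply.
    return f"[{model}] response to: {prompt}"

reply = send_prompt("alice@client.com", "claude-sonnet",
                    "Draft a QBR summary", allowed_models={"gpt-4o", "claude-sonnet"})
print(reply)
print(AUDIT_LOG[-1]["model"])  # every request flows through the same logged gateway
```

Because every request passes through the same gateway, policy and visibility live in one place regardless of which model answers.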

What this means for MSPs: 

  • In many cases, you can reduce or eliminate individual AI subscriptions for day-to-day knowledge work
  • You standardize on Liminal as the main AI workspace, under your policies and logging 
  • If a client has special needs, you can still support “bring your own model” arrangements where appropriate 

Deeper licensing discussions can be handled one-on-one with PORT1.


See value with Liminal in three steps 

1. Launch a proof of value for a client or your own MSP

  • Choose a priority client, your own MSP, or both as the first environment 
  • Stand up a Liminal tenant with conservative policies and a clear set of use cases to test 

 

2. Move one tenant into production and define the managed service

  • Keep the most successful proof-of-value tenant running in production
  • Work with PORT1 to define how Secure AI Workspace-as-a-Service and AI Governance fit into your service catalog and QBR conversations 

3. Roll out to more clients with a repeatable offer

  • Use what you learned to build a repeatable GenAI service offer 
  • Bring that offer into all client reviews and new business conversations as a standard part of your stack 

 

Each step uses the same platform and builds recurring service revenue rather than one-off projects.

Resources You Can Download 

Learn more with one simple click.

If you are ready to move beyond vague “we do AI too” claims

Liminal for MSPs gives you a concrete Secure AI Workspace-as-a-Service your clients can understand, trust, and invest in.

 
