Bring safe generative AI to every client, without adding chaos to your stack.
One Secure AI Workspace and Enterprise Search platform, wrapped in an MSP-friendly program you can run as Safe AI and Managed GenAI services.
Your clients’ users are already using AI. They paste sensitive data into tools you do not control, with no policies, no logging, and no real way to answer “how are we using AI” when the owners or auditors ask.
Liminal, delivered through PORT1, gives your MSP a Secure AI Workspace and Enterprise Search layer you can run as a managed service for your clients. We call this Secure AI Workspace-as-a-Service. PORT1 packages it so you can sell, deploy, and support it the way MSPs actually work.
Why MSPs Need a Safer Path to Generative AI
Shadow AI is already in your client base
Users are turning to tools like ChatGPT, Claude, Perplexity, and others to do their jobs more efficiently. Along the way, they may be copying client data, PHI, financial records, and internal documents into tools that sit completely outside your security stack.
Traditional security tools do not see AI usage
EDR, email security, web filters, and firewalls were not built to understand the prompts users send to AI models or the responses that come back. They cannot reliably control what goes into models or what comes back out.
Leaders are stuck between “no AI” and “hope for the best”
Banning AI drives it underground. Allowing anything creates real exposure. Liminal creates a third option: a safe, governed AI environment your MSP can run for them.
Your clients will use GenAI with or without you
A Secure AI Workspace with Liminal lets you get in front of that demand, offer a better GenAI experience, protect client data, and create a managed service that keeps you at the center of your clients’ AI journey instead of watching them go around you.
What Liminal gives you and your clients
Liminal is an AI security, governance, and enablement platform.
It sits between:
Users
People in your MSP and at your clients who use AI for everyday work.
AI Models
Multiple providers behind one consistent experience.
Business Data
Internal sources and SaaS content used safely in AI interactions.
Security Controls
Policies, access rules, and visibility that sit in the middle.
Its core promise is simple: enable generative AI everywhere work happens while protecting sensitive data, enforcing policies, and giving you real visibility. On top of Liminal, you deliver a Secure AI Workspace-as-a-Service: one governed AI workspace per client where users can work with leading models safely, and where you are in control.
Key Capabilities
Secure multi-model chat
One Secure Chat Workspace
A single place for users to work with AI under your rules.
Leading Model Choice
Access popular models in one interface, without tool sprawl.
Role-Based Model Access
Control who can use which models and when.
Sensitive Data Guardrails
Detect and protect data before anything reaches a model.
Enterprise Search with Spaces
Spaces with Permissions
Organize access by team or use case, aligned to policy.
Connect Key Repositories
Microsoft 365, Google Workspace, file shares, and more.
Access Controls Respected
Answers follow what users are allowed to see.
Knowledge Becomes Usable
Turn scattered docs into a living assistant for users.
Liminal Console and insights
Central Admin Console
Manage AI access, governance, and settings in one place.
Fine-Grained Policies
Set rules by user, team, model, and data type.
Real-Time Logs and Alerts
See what happened and respond quickly when it matters.
Adoption and Value Insights
Understand usage patterns and high-impact opportunities.
Desktop app, browser extension, and SDK
Meet Users Where They Work
Bring Secure AI into browser and desktop workflows.
Governed AI in Apps
Use the SDK to keep AI consistent, logged, and controlled.
Custom assistants for more tailored outputs
Build Secure Assistants
Create and share assistants tuned to real business needs.
Attach Supporting Content
Tailor outputs by client, department, or use case.
Why Liminal with PORT1 is built for MSPs
Liminal is enterprise grade. PORT1 makes it MSP-friendly.
Flexible Licensing Model
Monthly per-user licensing with a 10-user minimum per tenant.
Clear Tenant Patterns
Support internal teams and client environments cleanly.
Repeatable Service Plays
Package Safe AI and Managed GenAI into recurring services.
MSP-Focused Enablement
Messaging and delivery guidance built for provider workflows.
White Label Options
Brand the workspace with your logo or your client’s.
Support That Understands MSPs
Help from a team that knows managed services, not just software.
What This Means for You:
Differentiated Managed Services
Turn AI risk into recurring services, not one-off projects.
Reduced Provider Exposure
Bring unmanaged AI use under a governed layer.
Stronger QBR Conversations
Use reporting to reinforce value and guide improvements.
Say Yes to GenAI, Safely
Enable adoption with guardrails and visibility.
Core service offers you can build on Liminal
These are the offers most MSPs start with. You can adapt the naming and packaging to your own stack.
Secure AI Workspace-as-a-Service – everyday Safe AI for users
Give each client a safe, governed AI workspace their users can use every day.
Knowledge-Worker Teams
Organizations drafting, summarizing, researching, and communicating daily.
AI Use Is Already Happening
They are experimenting, or they will be soon.
Data Sensitivity Matters
They handle PII, PHI, financials, contracts, or IP.
Leadership Wants Guardrails
Owners and auditors want clarity, control, and visibility.
What You Deliver:
Tenant Setup and Configuration
Stand up the workspace with sensible defaults.
Baseline Data Policies
Guardrails for PII, PHI, financial data, and internal content.
Role-Based Access Control
Map access to the client’s org structure and risk profile.
Multi-Model Chat for Users
One place to use leading models safely.
Ongoing Policy Tuning
Refine controls based on real usage and feedback.
Reporting for QBRs
Show adoption, trends, and ongoing value.
How to Position It:
Secure AI Workspace
A secure AI workspace your users can use every day.
Guardrails and Visibility
One place to use AI with policy, logging, and leadership clarity.
AI Governance and Shadow AI Assessment – understand and shape current AI usage
What You Deliver:
Stakeholder Discovery
Interviews and policy review to understand current reality.
Current-State AI Mapping
Where AI is used today and what is most exposed.
Controls and Recommendations
Policies and guardrails to reduce risk quickly.
Rollout Roadmap
A clear plan to move usage into a governed workspace.
How It Ties into Managed Services:
One Governed Destination
Move AI usage into a controlled, auditable workspace.
Recurring Governance Motion
Turn findings into ongoing service and periodic reviews.
Enterprise Search Enablement – AI that understands client data
When to Position It:
After Core Adoption
Once the workspace is in daily use.
They Want AI on Their Data
The moment they ask, “Can AI use our docs?”
Search Is Slowing Work Down
Users waste time hunting across systems.
Knowledge Is Fragmented
Critical info is spread across tools and folders.
What You Deliver:
Source Prioritization
Identify which systems matter most first.
Connector Setup and Testing
Configure access and validate permissions.
Spaces by Team or Use Case
Structure knowledge access with governance in mind.
User Training and Adoption
Teach safe, effective ways to ask and work.
Value for Clients:
Faster Answers
Reduce time spent searching across systems.
Permissions Stay Enforced
AI respects existing access controls.
Safer Knowledge Access
Less copy and paste into unmanaged tools.
More Value from Existing Content
Make documentation usable, not forgotten.
How MSPs can use Liminal internally
Consider using Liminal inside your own MSP as one of your first proof-of-value environments, alongside an early adopter client. This builds internal knowledge and real examples without slowing down sales opportunities.
Service Desk and Support
Summarize tickets, draft responses, and speed up resolution.
Engineers and Delivery
Draft SOPs, troubleshoot faster, and document repeatably.
Sales, AMs, and vCIOs
Draft proposals, SOWs, and QBRs. Summarize meetings and follow ups.
Leadership and Operations
Draft policies, summarize reports, and support planning.
All of it happens inside a secure workspace instead of random AI tools. That reduces your own risk and gives you live stories to share with clients.
How Liminal helps your MSP
For MSP owners and principals:
Safe AI and Managed GenAI Revenue
New recurring services that are defensible and sticky.
Clear Differentiation
A concrete AI offer in crowded markets.
Reduced Client Risk Exposure
A structured, auditable layer around AI use.
Stronger Retention Levers
Ongoing reporting and governance reinforce value.
For MSP technical and security leads:
Policy and Visibility Layer
Govern AI use across users and clients with control.
Tenant Boundaries and RBAC
Structure that fits MSP delivery models.
Prompt-Level Observability
See real AI interactions, not just website categories.
Operational Clarity
Logs and insights for security workflows and QBRs.
For MSP sales and account managers:
Repeatable Talk Tracks
Messaging that works in every QBR.
Concrete Offers
Move beyond vague “we do AI too” claims.
Exec-Friendly Outcomes
Productivity, safety, and support for compliance efforts.
Clear Expansion Paths
Phase two value with client data and knowledge use cases.
Liminal’s security model centers on three steps.
Detect
Sensitive Data Identification
Detect sensitive data before anything is sent to a model.
Coverage Across Data Types
PII, PHI, PCI, compliance terms, and client-specific patterns.
Protect
Policy Actions
Mask, redact, warn, or allow based on policy.
Flexible Scope
Rules vary by user, group, tenant, and model.
Rehydrate
Authorized Context Restored
Protected terms rehydrate for approved users.
Models Stay Protected
Models and logs only see what policy allows.
This approach helps reduce the risk of data leakage, supports compliance strategies, and gives you a strong story with auditors and owners, without promising automatic compliance or absolute security.
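For technical leads who want to picture that flow, here is a minimal Python sketch of the detect, protect, and rehydrate pattern. The patterns, token format, and function names are illustrative assumptions used to show the idea, not Liminal's implementation or API.

```python
import re
import uuid

# Illustrative detectors only; a real deployment covers far more data
# types (PII, PHI, PCI, compliance terms, client-specific patterns).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def detect_and_protect(prompt: str) -> tuple[str, dict[str, str]]:
    """Detect sensitive values and swap them for opaque tokens
    before anything is sent to a model."""
    vault: dict[str, str] = {}
    protected = prompt
    for label, pattern in PATTERNS.items():
        for value in set(pattern.findall(protected)):
            token = f"[{label.upper()}-{uuid.uuid4().hex[:8]}]"
            vault[token] = value  # original value never leaves the workspace
            protected = protected.replace(value, token)
    return protected, vault

def rehydrate(response: str, vault: dict[str, str], authorized: bool) -> str:
    """Restore protected values in a response, but only for users the
    policy marks as authorized; everyone else keeps seeing tokens."""
    if not authorized:
        return response
    for token, value in vault.items():
        response = response.replace(token, value)
    return response

# The model only ever sees the masked prompt.
masked, vault = detect_and_protect(
    "Summarize the claim for jane@clinic.org, SSN 123-45-6789."
)
print(masked)  # Summarize the claim for [EMAIL-1a2b3c4d], SSN [SSN-5e6f7a8b].
```

In Liminal, the equivalent rules are driven by policy per user, group, tenant, and model, as described above, rather than hard-coded patterns.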
Liminal also provides third-party audits and attestations that matter to regulated SMBs, such as HIPAA and SOC 2, with additional frameworks in progress.
Featured white paper: The Practical Path into AI for Managed Service Providers
The Practical Path into AI for Managed Service Providers: Turning Secure AI Workspaces into Revenue
A white paper from PORT1’s founder that explains why AI is already inside your clients’ businesses, why Secure AI Workspaces are the most natural AI move for Managed Service Providers and Managed Security Service Providers, and how to turn Liminal into a recurring, differentiated service instead of a one-off project.
Vertical Examples You Can Lead With
Liminal is designed for sensitive and regulated environments that still need AI-driven productivity.
Healthcare and clinics
Governed Clinical Content
Draft documentation and communication in a controlled workspace.
PHI Guardrails
Reduce PHI exposure with policy enforcement.
Access and Visibility
Support expectations around access control and logging.
Safer Staff Productivity
Enable AI use without unmanaged tools.
Financial services, banking, and credit unions
Policy and Procedure Support
Summarize guidance and procedures with guardrails.
Secure Work Over Internal Docs
Use AI on internal content while respecting permissions.
Protect Financial Data
Reduce exposure of payment and account information.
Audit-Friendly Visibility
Logging and governance support oversight needs.
Insurance agencies and brokers
Faster Coverage Summaries
Draft content and communication with guardrails in place.
Claims Data Protection
Reduce exposure of customer and claims information.
Internal Training Support
Create consistent internal enablement content safely.
Governed Daily Use
Keep work AI in an approved environment.
Education
Staff Productivity
Draft and summarize content with guardrails.
Student Data Protection
Reduce exposure of student and family information.
Clear Usage Boundaries
Keep staff AI use in an approved workspace.
Visibility for Leadership
Support oversight without blocking adoption.
Biotech and life sciences
Support Knowledge Work
Work faster with complex documentation and research content.
Protect Proprietary IP
Reduce exposure while using AI for exploration.
Safer Summarization
Summarize technical material with policy guardrails.
Governed Model Access
Keep AI use visible and controlled.
These are all places where your MSP is already trusted. Liminal lets you extend that trust into AI.
Do clients still need separate tools like ChatGPT?
No. Your clients do not need a dozen different AI tools for everyday work.
How model access works conceptually:
One Front Door for AI
Users access AI through Liminal, not many separate sites.
Multi-Model on the Backend
Liminal connects to multiple providers behind the scenes.
Governed by Design
Policy and logging apply to every interaction.
Simplified User Experience
Less sprawl, more consistency for work AI.
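If it helps to visualize the “one front door” pattern, here is a minimal Python sketch of a governed gateway that enforces role-based model access and logs every interaction. The role names, model identifiers, and policy table are hypothetical; this illustrates the pattern, not Liminal's internals.

```python
import json
from datetime import datetime, timezone

# Hypothetical policy table: which models each role may use.
# In practice this is managed centrally, per tenant and per user.
ROLE_MODEL_POLICY = {
    "clinician": {"provider-a-general"},
    "finance": {"provider-a-general", "provider-b-analyst"},
    "admin": {"provider-a-general", "provider-b-analyst", "provider-c-research"},
}

AUDIT_LOG: list[dict] = []

def call_provider(model: str, prompt: str) -> str:
    """Stand-in for the backend provider call; every model sits behind
    the same interface, so users never juggle separate tools."""
    return f"[{model}] response to: {prompt[:40]}..."

def governed_chat(user: str, role: str, model: str, prompt: str) -> str:
    """Single entry point: check policy, log the interaction, then route."""
    allowed = model in ROLE_MODEL_POLICY.get(role, set())
    AUDIT_LOG.append({
        "user": user,
        "role": role,
        "model": model,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if not allowed:
        raise PermissionError(f"Role '{role}' may not use {model}")
    return call_provider(model, prompt)

print(governed_chat("rsmith", "finance", "provider-b-analyst",
                    "Summarize the Q3 loan portfolio notes."))
print(json.dumps(AUDIT_LOG, indent=2))
```

The point for positioning is that policy and logging wrap every call, no matter which provider answers it.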
What this means for MSPs:
Reduce Tool Sprawl
Often reduce or eliminate separate AI subscriptions for daily work.
Standardize the Workspace
One AI experience under your policies and logging.
Support Special Requirements
Accommodate unique needs when they arise.
Keep Control Centralized
Make AI usage manageable, visible, and repeatable.
Deeper licensing discussions can be handled one-on-one with PORT1.
See value with Liminal in three steps
1. Launch a proof of value for a client or your own MSP
Pick Your First Environment
Start with a priority client, your MSP, or both.
Stand Up a Safe Pilot
Conservative policies and a clear set of use cases.
2. Move one tenant into production and define the managed service
Move One Tenant to Production
Keep the best pilot running as the baseline environment.
Define the Managed Service
Align packaging, governance, and QBR reporting with PORT1.
3. Roll out to more clients with a repeatable offer
Make the Offer Repeatable
Turn the winning pilot into a standard service package.
Bring It to Every Account
Make it part of QBRs and new business conversations.
Each step uses the same platform and builds recurring service revenue rather than one-off projects.
Resources You Can Download
Learn more with one simple click.
If you are ready to move beyond vague “we do AI too” claims, Liminal for MSPs gives you a concrete Secure AI Workspace-as-a-Service your clients can understand, trust, and invest in.