Canonical is preparing to add AI features to Ubuntu over the next year, according to a blog post from Jon Seager, the company’s vice president of engineering.
Seager said the features will appear in two categories: some will use AI models in the background to improve existing operating system features, while others are intended to introduce “AI-native” workflows for users who choose to adopt them. Planned Ubuntu AI features include speech-to-text, text-to-speech, improved accessibility features, troubleshooting support, and personal automation. Canonical said its approach will prioritize model transparency, open-weight models, open-source harnesses, and local inference where possible.
Canonical’s internal AI usage
Canonical’s plan also covers how its own engineering teams use AI tools. Seager said the company has made targeted internal efforts to help engineers understand where AI tools are useful, without measuring staff by token usage or the amount of code written with AI.
“I will not judge the people at Canonical by how often they use AI, but will continue to judge them by how well they deliver,” Seager wrote.
Canonical also encourages teams to test different AI tools, which Seager said will help the company figure out where the tools are useful. He said AI could help with development and training, but that production code needs to remain controlled and auditable.
Seager also said that low-quality AI-generated contributions to open source projects have never been acceptable and are not encouraged at Canonical. Engineers and contributors must remain skeptical of AI-generated results.
The company’s plans for Ubuntu divide AI capabilities into “implicit” and “explicit” categories. Implicit AI refers to existing operating system features that are improved using AI models without changing the way users interact with the software: speech-to-text and text-to-speech are examples. Seager described these as accessibility features that can be improved with local inference and open-weight models.
Explicit AI capabilities would be more visible to users, such as agent-based workflows for writing documents or applications and automating tasks. The company pointed to Ubuntu’s existing Snap packaging model as part of the foundation for risk management.
Local inference and model access
Local inference is at the heart of Canonical’s plan: models run via snaps on the user’s device, which is intended to reduce the complexity of setting up models and hardware-specific builds. Where available, inference snaps provide components optimized for supported silicon platforms.
Inference snaps are also subject to the same containment rules as other snaps. Canonical said this limits model access to the user’s computer and data.
The company also weighs model licensing terms and does not consider access to model weights as the only measure of openness. Seager said Canonical will take a balanced view of licenses when choosing which models to make available in Ubuntu.
This approach reflects a distinction between open-weight models and the broader transparency expectations typically associated with open source software. Canonical said it favors local inference, open source systems and clearly defined interfaces to external services where users need them.
The company is also pursuing newer models that support features like tool invocation. These capabilities allow models to interact with external APIs, search the web, access file systems, and assist in troubleshooting when given permission.
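Tool invocation of this kind typically works as a loop: the model emits a structured request naming a tool, the host validates it against a list of tools the user has permitted, executes it, and feeds the result back to the model. A minimal sketch of the host-side dispatch step, with hypothetical tool names not tied to any Canonical implementation:

```python
import json

# Hypothetical registry of tools the host is willing to expose to a model.
# The "web_search" entry is a stub standing in for a real search backend.
TOOLS = {
    "read_file": lambda path: open(path).read(),
    "web_search": lambda query: f"results for {query!r}",
}

def dispatch(tool_call_json: str) -> str:
    """Validate a model-emitted tool call and execute it only if permitted."""
    call = json.loads(tool_call_json)
    name, args = call["name"], call.get("arguments", {})
    if name not in TOOLS:
        # Anything outside the registry is refused, regardless of what
        # the model asked for.
        raise PermissionError(f"tool {name!r} is not exposed to the model")
    return TOOLS[name](**args)

# The model asks to search the web; the host executes and returns the result,
# which would then be appended to the model's context.
result = dispatch('{"name": "web_search", "arguments": {"query": "wifi driver"}}')
```

The key design point is that the model only proposes calls; the host decides what actually runs, which is what makes the “when given permission” framing enforceable.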
Canonical said it plans to expand work on inference snaps by keeping up with newer model versions and adding optimized variants for more silicon platforms.
Agent workflows for Ubuntu
Seager also described a longer-term goal of making Ubuntu more context-aware. This includes using AI agents to help users navigate the capabilities of Linux workstations, particularly where the desktop ecosystem remains fragmented into many tools and components.
The same approach could extend beyond desktop use. Seager said site reliability engineers managing Ubuntu systems could use AI to interpret logs during incidents, support root cause analysis or carry out planned maintenance tasks under strict controls.
Canonical said such workflows should be based on existing production security measures, including access controls, audit trails, limited permissions and separation between observation and action.
Seager said the question is not just whether companies can trust agents, but also whether agents can work with the same controls already used in production systems. This includes read-only analytics, narrow permissions for actions, and auditability of decisions and results.
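Those controls can be pictured as a thin policy layer sitting between an agent and the system: read-only observations pass through freely, actions require an explicit, narrow grant, and both are appended to an audit trail. A minimal illustration with a hypothetical API, not tied to any Canonical tooling:

```python
from datetime import datetime, timezone

class AgentPolicy:
    """Separates read-only observation from privileged action, with auditing."""

    def __init__(self, allowed_actions: set[str]):
        self.allowed_actions = allowed_actions
        self.audit_log: list[dict] = []

    def _record(self, kind: str, name: str, allowed: bool) -> None:
        # Every decision is logged, including refusals, so results stay auditable.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "kind": kind,
            "name": name,
            "allowed": allowed,
        })

    def observe(self, name: str, read_fn):
        # Observation is read-only and always permitted, but still audited.
        self._record("observe", name, True)
        return read_fn()

    def act(self, name: str, action_fn):
        # Actions run only under a narrow, pre-approved grant.
        allowed = name in self.allowed_actions
        self._record("act", name, allowed)
        if not allowed:
            raise PermissionError(f"action {name!r} not granted")
        return action_fn()

# An incident-response agent may read logs freely but can only restart
# the one service it was granted.
policy = AgentPolicy(allowed_actions={"restart_service"})
logs = policy.observe("read_syslog", lambda: "kernel: wlan0 link down")
policy.act("restart_service", lambda: "restarted")
```

Separating `observe` from `act` mirrors the split Seager describes between read-only analytics and narrowly permissioned actions.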
He also gave examples of user-focused tasks that could be accomplished using this model, such as troubleshooting a Wi-Fi connection or setting up an open source software forge with security and TLS already configured.
Hardware and efficiency limits
Hardware availability remains a limitation for local inference. Smaller models can run on more common hardware, but are still not comparable to larger models for many tasks.
Seager said Canonical is watching developments in consumer silicon with stronger inference capabilities. He also said performance and energy efficiency need to be taken into account, especially as local accelerators become more powerful.
He pointed out that comparing cloud-based and local models on speed alone misses part of the picture: local accelerators can also reduce the power consumed by inference workloads, which matters as more AI functions are expected to run closer to the operating system.
Canonical did not give a specific release date for individual Ubuntu AI features. Seager said the features will be added over the next year as the company deems them mature enough for release.
(Photo by Gabriel Heinzer)
See also: OpenAI brings GPT-5.5 to Codex for coding tasks
Want to learn more about AI and big data from industry leaders? Check out the AI and big data trade fair taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and takes place alongside other leading technology events. Click here for more information.
AI News is powered by TechForge Media. Discover more upcoming enterprise technology events and webinars here.