Gcore Launches Flexible GPU Virtual Machines on NVIDIA Hopper


The new offering provides flexible, cost-efficient access to AI compute for agile development.

3/31/2026
Ghita Khalfaoui

Gcore, a global provider of edge AI and cloud solutions, has announced the launch of its new GPU Virtual Machines powered by NVIDIA's Hopper architecture. This service aims to deliver flexible and cost-effective access to high-performance AI computing for a diverse range of customers. The initial rollout is taking place in Gcore's sovereign AI region in Portugal, directly addressing the increasing demand for European-based infrastructure.


Addressing the Demand for Agile AI Infrastructure

As artificial intelligence development becomes more central to business operations, the need for dynamic infrastructure has grown significantly. Organizations require solutions that can scale efficiently to support iterative workflows and experimentation. Gcore's latest offering is designed to meet this demand by providing on-demand access to powerful computing resources.

The service is launching first in Gcore's Sines-3 facility in Portugal, a designated sovereign AI region. This strategic location caters to the rising European demand for AI infrastructure that complies with regional data sovereignty regulations. It makes the platform particularly attractive for EU research labs and startups focused on maintaining data locality.

Flexible and Cost-Efficient Compute Power

Unlike traditional dedicated servers, Gcore's GPU VMs do not require a long-term hardware commitment, offering greater agility. This model is ideal for organizations with fluctuating workloads, such as startups in their growth phase or labs conducting short-term experiments. It allows users to precisely match their compute capacity to their project's current needs.

A key feature of the new service is its innovative billing model, which helps organizations significantly reduce idle costs. When a virtual machine instance is powered off, the GPU billing automatically pauses, with charges only applying to storage and IP addresses. This allows teams to manage their budgets more effectively without losing their configurations or data.
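The savings from pause-on-power-off billing are easy to reason about with simple arithmetic. The sketch below is purely illustrative: the hourly rates and the 730-hour month are assumptions, not Gcore's published pricing, and the actual billing granularity may differ. It only reflects the rule described above, that GPU charges stop while the VM is off but storage and IP charges continue.

```python
# Illustrative cost model for pause-on-power-off GPU billing.
# All rates below are hypothetical placeholders, not Gcore's prices.

def monthly_cost(gpu_hours_running: float,
                 hours_in_month: float = 730.0,
                 gpu_rate_per_hour: float = 2.50,
                 storage_rate_per_hour: float = 0.02,
                 ip_rate_per_hour: float = 0.005) -> float:
    """GPU time is billed only while the VM is powered on;
    storage and the reserved IP accrue for the whole month."""
    gpu_cost = gpu_hours_running * gpu_rate_per_hour
    always_on_cost = hours_in_month * (storage_rate_per_hour + ip_rate_per_hour)
    return gpu_cost + always_on_cost

# A VM powered on 8 hours a day for 20 working days (160 GPU-hours)
# versus one left running all month:
part_time = monthly_cost(160.0)   # 400.00 GPU + 18.25 storage/IP = 418.25
always_on = monthly_cost(730.0)   # 1825.00 GPU + 18.25 storage/IP = 1843.25
print(part_time, always_on)
```

Under these assumed rates, powering the VM off outside working hours cuts the monthly bill by roughly three quarters while the disk and IP keep the environment intact.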

The platform provides considerable scalability, enabling users to adapt their resources as their computational demands evolve. Teams can begin with a single NVIDIA Hopper GPU and scale seamlessly to two-, four-, or eight-GPU virtual machines. This flexibility ensures that the infrastructure can support projects from initial proof of concept to large-scale training runs.
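The tier selection implied above can be sketched as a small sizing helper. This is a hypothetical illustration, not a Gcore API: the 80 GB-per-GPU figure matches an NVIDIA H100 SXM and is an assumption about the underlying Hopper hardware, and the tier list simply mirrors the one-, two-, four-, and eight-GPU options mentioned in the article.

```python
# Hypothetical sizing helper: pick the smallest offered VM tier
# (1, 2, 4, or 8 NVIDIA Hopper GPUs) whose aggregate GPU memory
# fits a workload. 80 GB per GPU assumes an H100 SXM part.

GPU_MEMORY_GB = 80
TIERS = [1, 2, 4, 8]

def smallest_tier(required_memory_gb: float) -> int:
    """Return the smallest GPU count whose total memory covers the need."""
    for gpus in TIERS:
        if gpus * GPU_MEMORY_GB >= required_memory_gb:
            return gpus
    raise ValueError("Workload exceeds the largest 8-GPU tier; "
                     "a Bare Metal cluster may be a better fit.")

print(smallest_tier(70))    # a 70 GB model fits on a single GPU
print(smallest_tier(300))   # 300 GB needs the four-GPU tier (320 GB)
```

Because the service bills per VM rather than per long-term contract, a team could start on the single-GPU tier for proof-of-concept work and recreate the VM at a larger tier only when a training run demands it.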

Integration into Gcore's AI Ecosystem

These new GPU VMs are the latest addition to the comprehensive Gcore GPU Cloud, which already includes Bare Metal and Spot Bare Metal GPU solutions. Customers can easily combine and switch between these offerings through the Gcore Customer Portal. This integrated approach provides users with precise control over how their GPU compute is utilized and paid for.

Seva Vayner, Product Director for Cloud Edge & AI at Gcore, stated that the launch aligns with the company's mission to democratize AI access. He emphasized that the service provides the necessary flexibility and cost efficiency for a wide range of users. This includes startups, SMBs, and research institutions that need powerful, on-demand sovereign infrastructure.


Gcore's introduction of NVIDIA Hopper-based GPU VMs marks a significant step in making high-performance AI infrastructure more accessible and affordable. By combining powerful hardware with a flexible, consumption-based billing model, the company addresses a critical need in the rapidly evolving AI landscape. This launch strengthens Gcore's position as a key provider of sovereign AI cloud solutions within Europe and beyond.