vContainer - Container Service for Kubernetes in VNG Cloud
vContainer: Container management & orchestration with Kubernetes
vContainer by VNG Cloud is a Kubernetes-based service that lets businesses deploy containerized applications in the cloud with high performance.
Kubernetes manages vServer clusters and runs containers on those clusters, handling deployment, maintenance, and auto-scaling. With Kubernetes, you can run any type of containerized application with the same toolset, deployable in both on-premises and cloud environments.
- Customers can create clusters that match their business requirements, from a basic non-HA cluster with 1 master node to an HA cluster with 3 or 5 master nodes.
- Clusters are deployed entirely within the initiating VPC, making it easy to integrate resources in that VPC with Pods.
- A Load Balancer service provides full layer 4 to layer 7 features and allows worker nodes to be added to or removed from the vLB.
- Users can define minimum and maximum numbers of worker nodes; the system automatically adds or removes nodes as the workload scales up or down.
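As a rough illustration of the scaling behavior described above, an autoscaler's decision can be sketched as a desired-node calculation clamped to the user-defined range. The function and parameter names below are hypothetical, not vContainer's actual API; the formula mirrors the standard Kubernetes autoscaling rule:

```python
import math

def desired_worker_count(current_nodes: int, avg_utilization: float,
                         target_utilization: float,
                         min_nodes: int, max_nodes: int) -> int:
    """Compute how many worker nodes an autoscaler should run.

    Follows the usual Kubernetes autoscaling formula:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the user-defined [min_nodes, max_nodes] range.
    """
    raw = math.ceil(current_nodes * avg_utilization / target_utilization)
    return max(min_nodes, min(max_nodes, raw))

# At 90% average utilization against a 60% target, 4 nodes want to
# scale out to 6, but the configured maximum of 5 caps the result.
print(desired_worker_count(4, 0.9, 0.6, min_nodes=2, max_nodes=5))  # prints 5
```

When load drops, the same formula shrinks the desired count, and the minimum bound keeps the cluster from scaling to zero workers.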
Why choose vContainer?
Discover success stories from VNG Cloud's customers, explore how cloud solutions help businesses overcome challenges in management and operations.
- Quick deployment, can be set up in just 2-3 days.
- Stable infrastructure, ensuring smooth operation of CRM and IP call center systems.
- Strong interworking capability, helping Neyu resolve issues that arise with external vendors.
- Expert team ready for 24/7 technical support.
Containers are self-contained software packages that encapsulate all the required components, including application code along with all the necessary configuration files, libraries, and dependencies to operate in any environment. By doing so, containers create a virtualized operating system that can run seamlessly across various settings, including data centers, the public cloud, or a developer's personal laptop.
Containerization empowers development teams to achieve agility, deploy software with efficiency, and operate at a large scale.
Application containerization is a method of OS-level virtualization used to deploy and run distributed applications without launching an entire virtual machine (VM) for each app. Multiple isolated applications or services can operate on a single host and share the same OS kernel. Containers are compatible with bare-metal systems, cloud instances, and virtual machines, running on Linux and select versions of Windows and macOS.
Kubernetes is an open-source container orchestration platform that simplifies the development and deployment of applications. Containers package and ship an application's code and dependencies, making it easier to move applications from one computing environment to another. However, as the number and complexity of applications increase and span multiple containers across various infrastructures, managing them becomes challenging.
Kubernetes automates many manual operational tasks related to deploying, updating, scaling, and monitoring multiple containerized applications. By creating an abstraction layer on top of the underlying physical infrastructure, Kubernetes facilitates running and operating applications more efficiently and resiliently.
Kubernetes addresses the following issues:
- Autoscaling: Automatically scale up and down workloads based on application demand, both horizontally and vertically.
- Automated rollouts and rollbacks: Deploy new versions or changes to your application or configuration with ease, and automatically roll back changes if needed.
- Health checks and self-healing: Continuously monitor container health, replace failing containers, and ensure services are only accessible when running successfully.
- Traffic routing and load balancing: Efficiently manage communication between containers across multiple deployment environments and optimize resources to respond to outages or periods of downtime.
- Storage orchestration: Effectively manage storage requirements for stateful applications, including local storage and public cloud providers.
- Secret and configuration management: Store and manage sensitive information and configuration data securely across different environments.
- Service mesh: Standardize communication between services to control container and service sprawl efficiently.
- Service discovery: Automatically manage the list of service instances to access, reducing configuration complexity.
- Authorization and Role-Based Access Control (RBAC): Simplify user access and permissions management across multiple accounts and access levels.
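Several of the capabilities above, such as automated rollouts and health checks, are typically expressed declaratively in a Deployment manifest. Below is a minimal sketch built as a plain Python dict so nothing depends on a real cluster; the app name, image, and probe path are placeholders, not values from any VNG Cloud service:

```python
# A minimal Kubernetes Deployment manifest, expressed as a Python dict.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-app"},
    "spec": {
        "replicas": 3,
        # Automated rollout: replace Pods gradually so a bad release
        # can be rolled back without taking the service down.
        "strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"maxUnavailable": 1, "maxSurge": 1},
        },
        "selector": {"matchLabels": {"app": "demo-app"}},
        "template": {
            "metadata": {"labels": {"app": "demo-app"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    "image": "registry.example.com/demo-app:1.0",  # placeholder
                    # Health check: the kubelet restarts the container
                    # when this probe fails (self-healing).
                    "livenessProbe": {
                        "httpGet": {"path": "/healthz", "port": 8080},
                        "periodSeconds": 10,
                    },
                }],
            },
        },
    },
}
```

In practice this would be written in YAML and applied with `kubectl apply`; the dict form above simply makes the structure of the rollout strategy and liveness probe explicit.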
Kubernetes is a robust container orchestration platform; however, it may not be the right choice for your organization if you are not facing the problems it is designed to solve. Consider using Kubernetes when you are:
- Managing more than one service.
- Deploying, updating, and managing software at scale.
- Simplifying and automating various complex management tasks.
- Building cloud-native applications with cross-platform capabilities.
By default, our portal limits each cluster to 10 minions (worker nodes) to prevent excessive instance creation from impacting the backend. Customers who need to deploy more than 10 minions can contact our 24/7 support team for assistance.
In fact, AWS's K8S also has master nodes, but they are fully managed by AWS and hidden from user interaction.
By contrast, VNG Cloud's K8S exposes the master node to users, letting them customize its configuration while VNG Cloud's infrastructure management keeps it running smoothly.
Migrating from AWS to VNG Cloud won't substantially disrupt the system architecture; however, some code adjustments will be necessary to fit the new environment.