Network
VPC
Without a VPC network, there are no routes!
Without a VPC network, you cannot create VM instances, containers or App Engine applications.
Networks - Subnets
Network overview for VMs:
Every VM is part of a VPC network. VPC networks provide connectivity for your VM instance to other Google Cloud products and to the internet.
VPC networks can be auto mode or custom mode.
- Auto mode networks have one subnetwork (subnet) in each region.
All subnets are contained within the IP address range 10.128.0.0/9. Auto mode networks support only IPv4 subnet ranges.
One subnet in each region: a fixed /20 subnetwork, which can be expanded to /16 but no larger. Alternatively, you can convert the auto mode network to a custom mode network to increase the IP range further.
Also, avoid creating large subnets. Overly large subnets are more likely to cause CIDR range collisions.
- Custom mode networks don't have a specified subnet configuration; you decide which subnets to create, in the regions you choose, using IPv4 ranges that you specify. Custom mode networks also support IPv6 subnet ranges.
Unless you choose to disable it, each project has a default network, which is an auto mode network.
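As a sketch, creating a custom mode network, adding one subnet, and expanding its range might look like this (all names and ranges are placeholders):
# Custom mode: no subnets are created automatically.
gcloud compute networks create my-custom-net --subnet-mode=custom
# Create a regional subnet with an IPv4 range you choose.
gcloud compute networks subnets create my-subnet \
  --network=my-custom-net --region=us-central1 --range=10.0.0.0/24
# Expand the subnet's primary range (you can only decrease the prefix length).
gcloud compute networks subnets expand-ip-range my-subnet \
  --region=us-central1 --prefix-length=20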
VPC Scope
VPC networks are global.
VPC subnets are regional.
IP addresses
- A VM interface has an internal IPv4 address, which is allocated from the subnet.
- It can optionally have an external IPv4/IPv6 address.
- A VM can have an alias IP range (an assigned block of IPs); see the sketch below.
If the VM does not have an external IPv4/IPv6 address to communicate with the internet, it can use Cloud NAT.
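A minimal sketch of an internal-only VM with an alias IP range, assuming the placeholder subnet above (aliases=/28 asks for an automatically allocated /28 block from the subnet's primary range):
# no-address: internal IP only; aliases=/28: auto-allocated alias block.
gcloud compute instances create vm-internal --zone=us-central1-a \
  --network-interface=subnet=my-subnet,no-address,aliases=/28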
RFC 1918 IP addresses:
- Class A: 10.0.0.0/8 (10.x.x.x)
- Class B: 172.16.0.0/12 (172.16.x.x to 172.31.x.x)
- Class C: 192.168.0.0/16 (192.168.x.x)
VMs in separate VPC networks cannot ping each other's internal IPs unless you use VPC Peering.
VPC Network Peering
VPC Peering allows internal IP address connectivity across two VPC networks, regardless of whether they belong to the same project or the same organization.
It enables you to connect VPC networks so that workloads in different VPC networks can communicate internally.
Traffic stays within Google's network and doesn't traverse the public internet.
Key Properties
Some key properties of peered VPC networks:
- VPC Network Peering works with Compute Engine, GKE, and App Engine flexible environment.
- Peered VPC networks remain administratively separate. Routes, firewalls, VPNs, and other traffic management tools are administered and applied separately in each of the VPC networks.
- Each side of a peering association is set up independently. Peering will be active only when the configuration from both sides matches. Either side can choose to delete the peering association at any time.
- A given VPC network can peer with multiple VPC networks, but there is a limit (25).
Some Restrictions
- VPC Network Peering supports IPv4 connectivity only. You can configure VPC Network Peering on a VPC network that contains dual-stack subnets. However, there is no IPv6 connectivity between the networks.
- A subnet CIDR range in one peered VPC network cannot overlap with a static route in another peered network. This rule covers both subnet routes and static routes.
- A dynamic route can overlap with a subnet route in a peer network. For dynamic routes, the destination ranges that overlap with a subnet route from the peer network are silently dropped.
- You cannot use a tag or service account from one peered network in the other peered network.
- Only directly peered networks can communicate; transitive peering is not supported.
VPC Peering works within a project, across projects, and across organizations; it is decentralized network administration.
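A minimal sketch of peering two networks across projects (network, project, and peering names are placeholders); the peering becomes active only once both sides are configured:
gcloud compute networks peerings create peer-a-to-b \
  --network=net-a --peer-project=project-b --peer-network=net-b
# And from the other project:
gcloud compute networks peerings create peer-b-to-a \
  --network=net-b --peer-project=project-a --peer-network=net-a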
Shared VPC
Shared VPC allows an organization to connect resources from multiple projects to a common VPC network, so that they can communicate with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it.
Shared VPC only works across projects in the same organization (not within a single project, and not across organizations); it is centralized network administration.
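A minimal sketch, assuming placeholder project IDs and the required Shared VPC Admin role:
# Designate the host project, then attach a service project to it.
gcloud compute shared-vpc enable host-project-id
gcloud compute shared-vpc associated-projects add service-project-id \
  --host-project=host-project-id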
Cloud NAT
Cloud NAT provides outgoing connectivity for the following resources:
- Compute Engine virtual machine (VM) instances without external IP addresses.
- Private Google Kubernetes Engine (GKE) clusters.
- Cloud Run instances through Serverless VPC Access.
- Cloud Functions instances through Serverless VPC Access.
- App Engine standard environment instances through Serverless VPC Access.
Cloud NAT relies on routes whose next hops are the default internet gateway; the default route in a VPC network satisfies this requirement.
VM instances that have no external IP addresses can use Private Google Access (PGA) to reach the external IP addresses of Google APIs and services. By default, Private Google Access is disabled on a VPC network.
Although, once PGA is enabled, Internal-VM can access certain Google APIs and services without an external IP address, the instance still cannot access the internet for updates and patches.
Configure a Cloud NAT gateway, which allows Internal-VM to reach the internet.
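A minimal sketch of both steps, reusing the placeholder names from earlier:
# Enable Private Google Access on the subnet.
gcloud compute networks subnets update my-subnet \
  --region=us-central1 --enable-private-ip-google-access
# A Cloud NAT gateway needs a Cloud Router in the same region.
gcloud compute routers create my-router \
  --network=my-custom-net --region=us-central1
gcloud compute routers nats create my-nat --router=my-router \
  --region=us-central1 --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges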
Cloud NAT - IAP
# --tunnel-through-iap: IAP TCP forwarding enables SSH/RDP access to VM instances that
# do not have external IP addresses or do not permit direct access over the internet.
gcloud compute ssh Internal-VM --zone us-central1-c --tunnel-through-iap
# CLI: https://cloud.google.com/nat/docs/set-up-manage-network-address-translation#gcloud
More details about private VM connectivity & IAP.
Serverless VPC Access Connector
Some services, like Cloud Run and Cloud Functions, are not part of the VPC.
They can't connect to private resources that only have an internal IP address in the VPC (a VM, a Memorystore instance, ...).
But the solution is straightforward: you can create a VPC Access connector to reach those internal IP addresses.
A VPC Access connector is a resource in the VPC that acts as a router to forward traffic from Cloud Run services.
The VPC Access connector is not serverless: you are charged for having a connector enabled even if it does not forward connections.
CLI Cloud Run examples.
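For instance, creating a connector and attaching it to a Cloud Run service could be sketched like this (names and the /28 range are placeholders; the range must not overlap existing subnets):
gcloud compute networks vpc-access connectors create my-connector \
  --region=us-central1 --network=my-custom-net --range=10.8.0.0/28
# Route the service's outbound traffic through the connector.
gcloud run deploy my-service \
  --image=us-docker.pkg.dev/cloudrun/container/hello \
  --region=us-central1 --vpc-connector=my-connector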
Hybrid Connectivity
Connecting to an on-premises data center.
Cloud Interconnect
Cloud Interconnect provides low latency, high availability connections between your on-premises and VPC networks.
Also, Interconnect connections provide internal IP address communication, which means internal IP addresses are directly accessible from both networks.
Cloud Interconnect offers two options for extending your on-premises network:
- Dedicated Interconnect provides a direct physical connection between your on-premises network and Google's network.
- Partner Interconnect provides connectivity between your on-premises and VPC networks through a supported service provider.
Some Benefits
- Traffic between your on-premises network and your VPC network doesn't traverse the public internet.
- Your VPC network's internal IP addresses are directly accessible from your on-premises network. You don't need to use a NAT device or VPN tunnel to reach internal IP addresses.
- You can use Cloud Interconnect with Private Google Access for on-premises hosts so that on-premises hosts can use internal IP addresses rather than external IP addresses to reach Google APIs and services.
- Dedicated Interconnect, Partner Interconnect, Direct Peering and Carrier Peering can all help you optimize egress traffic from your VPC network and reduce your egress costs. Cloud VPN by itself does not reduce egress costs.
Interconnect services provide direct access to RFC 1918 IP addresses in your VPC, with an SLA.
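For example, a Partner Interconnect VLAN attachment could be sketched as follows (all names are placeholders; the pairing key this produces is handed to the service provider):
gcloud compute interconnects attachments partner create my-attachment \
  --region=us-central1 --router=my-router \
  --edge-availability-domain=availability-domain-1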
Peering
- Direct Peering enables you to establish a direct peering connection between your business network and Google's edge network and exchange high-throughput cloud traffic.
- Direct Peering exists outside of Google Cloud. Unless you need to access Google Workspace applications, the recommended methods of access to Google Cloud are Dedicated Interconnect or Partner Interconnect.
- Carrier Peering is the same as Direct Peering, but done through a service provider. It enables you to access Google applications, such as Google Workspace and YouTube.
Peering services, in contrast to Interconnect services, are for access to Google public IP addresses only, without an SLA.
Cloud VPN
Cloud VPN securely connects your peer network to your VPC network through an IPsec VPN connection.
If you use a Cloud VPN tunnel to connect your networks, you can use Cloud Router to establish a BGP session with a router in your peer network.
- HA VPN supports dynamic routing (BGP) only and supports both IPv4 and IPv6 traffic.
- Classic VPN supports both dynamic routing and static routing, and supports IPv4 traffic only.
CLI examples.
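A minimal HA VPN sketch (gateway, router, and one of the two tunnels; all names, the ASN, and the shared secret are placeholders, and the peer side must first be registered as an external VPN gateway):
gcloud compute vpn-gateways create ha-vpn-gw \
  --network=my-custom-net --region=us-central1
gcloud compute routers create vpn-router \
  --network=my-custom-net --region=us-central1 --asn=65001
gcloud compute vpn-tunnels create tunnel-0 --region=us-central1 \
  --vpn-gateway=ha-vpn-gw --interface=0 \
  --peer-external-gateway=on-prem-gw --peer-external-gateway-interface=0 \
  --router=vpn-router --ike-version=2 --shared-secret=SECRET
# A BGP session is then added to vpn-router (add-interface / add-bgp-peer).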
Load Balancing
Components
Some components:
- Forwarding rule: together with its corresponding IP address, it represents the frontend configuration of a Google Cloud load balancer.
- Target proxies: are referenced by one or more forwarding rules. They terminate connections from the client and create new connections to the backends. In the case of an HTTP LB, proxies route incoming requests to a URL map.
- URL map: a configuration resource used to route requests to backend services or backend buckets. A single URL map can route requests to different destinations based on the rules configured in the URL map.
- Backend services: define how Cloud Load Balancing distributes traffic. A backend service contains a set of values such as the protocol used to connect to backends, various distribution and session settings, health checks, and timeouts.
- Cloud CDN: can be enabled.
- Cloud Armor: can be enabled for protection against DDoS.
- IAP (Identity-Aware Proxy): for more protection.
- Target pools: a network LB can use either a backend service or a target pool to define the group of backend instances that receive incoming traffic.
- So the architecture of a network LB depends on whether you use a backend service-based network load balancer or a target pool-based network load balancer.
- IPv6 is supported only by TCP proxy LB, SSL proxy LB, and HTTP(S) LB.
- LB CLI examples.
- Cloud Run LB examples.
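A minimal sketch of wiring these components into a global HTTP LB (all names are placeholders; it assumes an instance group my-ig and a health check my-hc already exist):
gcloud compute backend-services create web-backend \
  --protocol=HTTP --health-checks=my-hc --global
gcloud compute backend-services add-backend web-backend \
  --instance-group=my-ig --instance-group-zone=us-central1-a --global
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute target-http-proxies create web-proxy --url-map=web-map
gcloud compute forwarding-rules create web-fr --global \
  --target-http-proxy=web-proxy --ports=80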
Network LB is a regional, non-proxied LB service: all traffic is passed through the load balancer instead of being proxied, and traffic can only be balanced between VM instances that are in the same region, unlike a global load balancer.
AutoScaling
Dynamically add/remove instances based on the following policies (see the sketch after this list):
- CPU utilization.
- LB capacity.
- Monitoring metrics.
- Queue-based workloads.
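A minimal sketch of CPU-based autoscaling on a managed instance group (group name and zone are placeholders):
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone=us-central1-a --min-num-replicas=1 --max-num-replicas=5 \
  --target-cpu-utilization=0.6 --cool-down-period=90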