GCP Terraform Tutorial | What Is Terraform | Terraform With Google Cloud Platform
This GCP Terraform tutorial gives you an overview of Terraform with Google Cloud Platform and helps you understand various important GCP Terraform concepts through practical implementation.
INTRODUCTION
The purpose of this article is to show a full Google Cloud Platform (GCP) environment built using Terraform automation. I'll walk through the setup process for Google Cloud Platform and Terraform. I will be creating everything from scratch: a VPC network, four sub-networks, two in each region (labeled private and public), firewall rules allowing HTTP traffic and SSH access, and finally two virtual instances, one in each region's public sub-network, each running as a web server.
At the end of my deployment, I will have a Google Cloud Platform (GCP) environment set up with two web servers running in different regions, as shown below:
GCP Environment and Terraform directory structure
Let's get started with defining some terms and technologies:
Terraform: a tool used to turn infrastructure development into code.
Google Cloud SDK: command line utility for managing Google Cloud Platform resources.
Google Cloud Platform: cloud-based infrastructure environment.
Google Compute Engine: resource that provides virtual systems to Google Cloud Platform customers.
You might be asking: why use Terraform?
Terraform has become popular because it has a simple syntax that allows easy modularity and works across multiple cloud providers. One of the most important reasons people consider Terraform is to manage their infrastructure as code.
Installing Terraform:
Terraform is easy to install if you haven't already done so. I am using Linux:
sudo yum install -y zip unzip (if these are not installed)
wget https://releases.hashicorp.com/terraform/0.X.X/terraform_0.X.X_linux_amd64.zip (replace X.X with your version)
unzip terraform_0.11.6_linux_amd64.zip
sudo mv terraform /usr/local/bin/
Confirm the terraform binary is accessible: terraform --version
Make sure Terraform works:
$ terraform -v
Terraform v0.11.6
Downloading and configuring Google Cloud SDK
Now that we have Terraform installed, we need to set up the command line utility to interact with our services on Google Cloud Platform. This will allow us to authenticate to our account on Google Cloud Platform and subsequently use Terraform to manage infrastructure.
Download and install Google Cloud SDK:
$ curl https://sdk.cloud.google.com | bash
Initialize the gcloud environment:
$ gcloud init
You'll be able to connect your Google account with the gcloud environment by following the on-screen instructions in your browser. If you're stuck, try checking out the official documentation.
Configuring our Service Account on Google Cloud Platform
Next, I will create a project, set up a service account, and grant the correct permissions to manage the project's resources.
· Create a project and name it whatever you'd like.
· Create a service account and specify the Compute Admin role.
· Download the generated JSON key file and save it to your project's directory.
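If you prefer the command line, the same service account setup can be scripted with gcloud. This is a minimal sketch, not the article's exact steps; the project ID, service account name, and key path are placeholder values:
# Create a service account (name is an example)
gcloud iam service-accounts create terraform-sa --display-name "Terraform"
# Grant the Compute Admin role (replace my-project with your project ID)
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:terraform-sa@my-project.iam.gserviceaccount.com" \
  --role "roles/compute.admin"
# Download a JSON key for Terraform to authenticate with
gcloud iam service-accounts keys create ./terraform-sa-key.json \
  --iam-account terraform-sa@my-project.iam.gserviceaccount.com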
TERRAFORM PROJECT FILE STRUCTURE
Terraform evaluates all the files inside the working directory, so it does not matter whether everything is contained in a single file or divided into many, although it is convenient to organize resources into logical groups and split them into different files. Let's look at how we can do this effectively:
Terraform File Structure
Root level: all .tf files are contained in the GCP folder.
main.tf: this is where I execute Terraform from. It contains the following sections:
a) Provider section: defines Google as the provider
b) Module section: GCP resources that point to each module in the modules folder
c) Output section: displays outputs after terraform apply
variables.tf: this is where I define all the variables that go into main.tf. Each module's variables.tf contains static values, such as regions, plus the other variables that I pass through from the main variables.tf.
Only the main variables.tf needs to be modified. I kept it simple so I don't have to modify every variable file under each module.
backend.tf: for capturing and saving tfstate in a Google Storage bucket that I can share with other developers (see the sketch after this list).
Module folders: I am using three main modules here: global, ue1 and uc1.
* The global module has resources that are not region-specific, such as the VPC network, firewall and firewall rules.
* The uc1 and ue1 modules have region-based resources. Together they create four sub-networks (two public and two private), two in each region, and one instance in each region.
Within my directory structure, I have packaged regional resources under one module per region and global resources in a separate module; that way I have to define the variables for a given region only once per module. IAM is another resource that you could define under the global module.
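For reference, a minimal backend.tf along the lines described above might look like the following; the bucket name and prefix are placeholders, and the bucket must already exist before terraform init is run:
terraform {
  backend "gcs" {
    bucket = "my-tfstate-bucket"     # placeholder: an existing GCS bucket
    prefix = "gcp/terraform.tfstate"
  }
}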
I am running terraform init, plan and apply from the main folder, where I have defined all the GCP resources. I will post another article in the future dedicated to Terraform modules: when and why it is best to use modules, and which resources should be packaged in a module.
main.tf creates all the GCP resources that are defined under each module folder. You can see that each source points to a relative path within my directory structure. You can also store modules in a VCS such as GitHub.
provider "google" {
project = "${var.var_project}"
}
module "vpc" {
source = "../modules/global"
env = "${var.var_env}"
company = "${var.var_company}"
var_uc1_public_subnet = "${var.uc1_public_subnet}"
var_uc1_private_subnet= "${var.uc1_private_subnet}"
var_ue1_public_subnet = "${var.ue1_public_subnet}"
var_ue1_private_subnet= "${var.ue1_private_subnet}"
}
module "uc1" {
source = "../modules/uc1"
network_self_link = "${module.vpc.out_vpc_self_link}"
subnetwork1 = "${module.uc1.uc1_out_public_subnet_name}"
env = "${var.var_env}"
company = "${var.var_company}"
var_uc1_public_subnet = "${var.uc1_public_subnet}"
var_uc1_private_subnet= "${var.uc1_private_subnet}"
}
module "ue1" {
source = "../modules/ue1"
network_self_link = "${module.vpc.out_vpc_self_link}"
subnetwork1 = "${module.ue1.ue1_out_public_subnet_name}"
env = "${var.var_env}"
company = "${var.var_company}"
var_ue1_public_subnet = "${var.ue1_public_subnet}"
var_ue1_private_subnet= "${var.ue1_private_subnet}"
}
######################################################################
# Display Output Public Instance
######################################################################
output "uc1_public_address" { value = "${module.uc1.uc1_pub_address}"}
output "uc1_private_address" { value = "${module.uc1.uc1_pri_address}"}
output "ue1_public_address" { value = "${module.ue1.ue1_pub_address}"}
output "ue1_private_address" { value = "${module.ue1.ue1_pri_address}"}
output "vpc_self_link" { value = "${module.vpc.out_vpc_self_link}"}
Variables.tf
I have used variables for the CIDR range of each sub-network and for the project name. I am also using variables to name GCP resources, so that I can easily identify which environment a resource belongs to. All variables are defined in the variables.tf file. Every variable is of type string.
variable "var_project" {
default = "project-name"
}
variable "var_env" {
default = "dev"
}
variable "var_company" {
default = "company-name"
}
variable "uc1_private_subnet" {
default = "10.26.1.0/24"
}
variable "uc1_public_subnet" {
default = "10.26.2.0/24"
}
variable "ue1_private_subnet" {
default = "10.26.3.0/24"
}
variable "ue1_public_subnet" {
default = "10.26.4.0/24"
}
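Since every variable has a default, nothing needs to be passed on the command line; to target a different project or environment, the defaults can be overridden with -var flags (or a terraform.tfvars file). For example, with a placeholder project ID:
terraform plan -var 'var_project=my-actual-project' -var 'var_env=stage'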
VPC.tf
In the VPC file, I have configured the routing mode as global and disabled automatic sub-network creation, since GCP otherwise creates sub-networks in every region during VPC creation. I am also creating firewall rules attached to the VPC to allow ICMP, TCP and UDP within the internal network, and external SSH access to my bastion host.
resource "google_compute_network" "vpc" {
name = "${format("%s","${var.company}-${var.env}-vpc")}"
auto_create_subnetworks = "false"
routing_mode = "GLOBAL"
}
resource "google_compute_firewall" "allow-internal" {
name = "${var.company}-fw-allow-internal"
network = "${google_compute_network.vpc.name}"
allow {
protocol = "icmp"
}
allow {
protocol = "tcp"
ports = ["0-65535"]
}
allow {
protocol = "udp"
ports = ["0-65535"]
}
source_ranges = [
"${var.var_uc1_private_subnet}",
"${var.var_ue1_private_subnet}",
"${var.var_uc1_public_subnet}",
"${var.var_ue1_public_subnet}"
]
}
resource "google_compute_firewall" "allow-http" {
name = "${var.company}-fw-allow-http"
network = "${google_compute_network.vpc.name}"
allow {
protocol = "tcp"
ports = ["80"]
}
target_tags = ["http"]
}
resource "google_compute_firewall" "allow-bastion" {
name = "${var.company}-fw-allow-bastion"
network = "${google_compute_network.vpc.name}"
allow {
protocol = "tcp"
ports = ["22"]
}
target_tags = ["ssh"]
}
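main.tf references module.vpc.out_vpc_self_link, so the global module also needs an output that exposes the VPC self link. That file is not shown in the article; a minimal sketch would be:
output "out_vpc_self_link" {
  value = "${google_compute_network.vpc.self_link}"
}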
Network.tf
In the network.tf file, I set up the public and private sub-networks and attach each sub-network to my VPC. The values for the regions come from the variables.tf file defined within each sub-module folder (not shown here; see the sketch after the code below). I have two network.tf files, one in each module folder; the only difference between the two is the region, us-east1 vs us-central1.
resource "google_compute_subnetwork" "public_subnet" {
name = "${format("%s","${var.company}-${var.env}-${var.region_map["${var.var_region_name}"]}-pub-net")}"
ip_cidr_range = "${var.var_uc1_public_subnet}"
network = "${var.network_self_link}"
region = "${var.var_region_name}"
}
resource "google_compute_subnetwork" "private_subnet" {
name = "${format("%s","${var.company}-${var.env}-${var.region_map["${var.var_region_name}"]}-pri-net")}"
ip_cidr_range = "${var.var_uc1_private_subnet}"
network = "${var.network_self_link}"
region = "${var.var_region_name}"
}
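The per-module variables.tf mentioned above (not shown in the article) supplies the region name and the region-to-prefix map used in the resource names. Based on the names in the code and the plan output, the uc1 version would look roughly like this:
variable "var_region_name" {
  default = "us-central1"
}

# Maps each region to the short prefix used in resource names
variable "region_map" {
  type = "map"
  default = {
    "us-central1" = "uc1"
    "us-east1"    = "ue1"
  }
}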
Instance.tf
Here, I am creating a virtual machine instance and a network interface within the public sub-network, and then attaching the network interface to the instance. I am also running a startup script that installs NGINX during instance creation and boot. I have two instance.tf files, one in each module folder; the only difference between the two is the region, us-east1 vs us-central1.
resource "google_compute_instance" "default" {
name = "${format("%s","${var.company}-${var.env}-${var.region_map["${var.var_region_name}"]}-instance1")}"
machine_type = "n1-standard-1"
#zone = "${element(var.var_zones, count.index)}"
zone = "${format("%s","${var.var_region_name}-b")}"
tags = ["ssh","http"]
boot_disk {
initialize_params {
image = "centos-7-v20180129"
}
}
labels {
webserver = "true"
}
metadata {
startup-script = <<SCRIPT
apt-get -y update
apt-get -y install nginx
export HOSTNAME=$(hostname | tr -d '\n')
export PRIVATE_IP=$(curl -sf -H 'Metadata-Flavor:Google' http://metadata/computeMetadata/v1/instance/network-interfaces/0/ip | tr -d '\n')
echo "Welcome to $HOSTNAME - $PRIVATE_IP" > /usr/share/nginx/www/index.html
service nginx start
SCRIPT
}
network_interface {
subnetwork = "${google_compute_subnetwork.public_subnet.name}"
access_config {
// Ephemeral IP
}
}
}
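main.tf also consumes per-module outputs (uc1_out_public_subnet_name, uc1_pub_address, uc1_pri_address, and the ue1 equivalents), so each regional module needs an outputs file. It is not shown in the article; based on the attribute names visible in the plan output, a sketch for uc1 would be:
output "uc1_out_public_subnet_name" {
  value = "${google_compute_subnetwork.public_subnet.name}"
}

output "uc1_pub_address" {
  # Ephemeral external IP assigned through access_config
  value = "${google_compute_instance.default.network_interface.0.access_config.0.assigned_nat_ip}"
}

output "uc1_pri_address" {
  value = "${google_compute_instance.default.network_interface.0.network_ip}"
}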
$ terraform init
Initializing modules...
- module.vpc
- module.uc1
- module.ue1
Initializing provider plugins...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.google: version = "~> 1.20"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ module.uc1.google_compute_instance.default
id: <computed>
boot_disk.#: "1"
boot_disk.0.auto_delete: "true"
boot_disk.0.device_name: <computed>
boot_disk.0.disk_encryption_key_sha256: <computed>
boot_disk.0.initialize_params.#: "1"
boot_disk.0.initialize_params.0.image: "debian-9-stretch-v20180227"
boot_disk.0.initialize_params.0.size: <computed>
boot_disk.0.initialize_params.0.type: <computed>
can_ip_forward: "false"
cpu_platform: <computed>
create_timeout: "4"
deletion_protection: "false"
guest_accelerator.#: <computed>
instance_id: <computed>
label_fingerprint: <computed>
labels.%: "1"
labels.webserver: "true"
machine_type: "n1-standard-1"
metadata_fingerprint: <computed>
name: "company-dev-uc1-instance1"
network_interface.#: "1"
network_interface.0.access_config.#: "1"
network_interface.0.access_config.0.assigned_nat_ip: <computed>
network_interface.0.access_config.0.nat_ip: <computed>
network_interface.0.access_config.0.network_tier: <computed>
network_interface.0.address: <computed>
network_interface.0.name: <computed>
network_interface.0.network_ip: <computed>
network_interface.0.subnetwork: "company-dev-uc1-pub-net"
network_interface.0.subnetwork_project: <computed>
project: <computed>
scheduling.#: <computed>
self_link: <computed>
tags.#: "2"
tags.2541227442: "http"
tags.4002270276: "ssh"
tags_fingerprint: <computed>
zone: "us-central1-a"
+ module.uc1.google_compute_subnetwork.private_subnet
id: <computed>
creation_timestamp: <computed>
fingerprint: <computed>
gateway_address: <computed>
ip_cidr_range: "10.26.1.0/24"
name: "company-dev-uc1-pri-net"
network: "${var.network_self_link}"
project: <computed>
region: "us-central1"
secondary_ip_range.#: <computed>
self_link: <computed>
+ module.uc1.google_compute_subnetwork.public_subnet
id: <computed>
creation_timestamp: <computed>
fingerprint: <computed>
gateway_address: <computed>
ip_cidr_range: "10.26.2.0/24"
name: "company-dev-uc1-pub-net"
network: "${var.network_self_link}"
project: <computed>
region: "us-central1"
secondary_ip_range.#: <computed>
self_link: <computed>
+ module.ue1.google_compute_instance.default
id: <computed>
boot_disk.#: "1"
boot_disk.0.auto_delete: "true"
boot_disk.0.device_name: <computed>
boot_disk.0.disk_encryption_key_sha256: <computed>
boot_disk.0.initialize_params.#: "1"
boot_disk.0.initialize_params.0.image: "centos-7-v20180129"
boot_disk.0.initialize_params.0.size: <computed>
boot_disk.0.initialize_params.0.type: <computed>
can_ip_forward: "false"
cpu_platform: <computed>
create_timeout: "4"
deletion_protection: "false"
guest_accelerator.#: <computed>
instance_id: <computed>
label_fingerprint: <computed>
labels.%: "1"
labels.webserver: "true"
machine_type: "n1-standard-1"
metadata_fingerprint: <computed>
name: "company-dev-ue1-instance1"
network_interface.#: "1"
network_interface.0.access_config.#: "1"
network_interface.0.access_config.0.assigned_nat_ip: <computed>
network_interface.0.access_config.0.nat_ip: <computed>
network_interface.0.access_config.0.network_tier: <computed>
network_interface.0.address: <computed>
network_interface.0.name: <computed>
network_interface.0.network_ip: <computed>
network_interface.0.subnetwork: "company-dev-ue1-pub-net"
network_interface.0.subnetwork_project: <computed>
project: <computed>
scheduling.#: <computed>
self_link: <computed>
tags.#: "2"
tags.2541227442: "http"
tags.4002270276: "ssh"
tags_fingerprint: <computed>
zone: "us-east1-b"
+ module.ue1.google_compute_subnetwork.private_subnet
id: <computed>
creation_timestamp: <computed>
fingerprint: <computed>
gateway_address: <computed>
ip_cidr_range: "10.26.3.0/24"
name: "company-dev-ue1-pri-net"
network: "${var.network_self_link}"
project: <computed>
region: "us-east1"
secondary_ip_range.#: <computed>
self_link: <computed>
+ module.ue1.google_compute_subnetwork.public_subnet
id: <computed>
creation_timestamp: <computed>
fingerprint: <computed>
gateway_address: <computed>
ip_cidr_range: "10.26.4.0/24"
name: "company-dev-ue1-pub-net"
network: "${var.network_self_link}"
project: <computed>
region: "us-east1"
secondary_ip_range.#: <computed>
self_link: <computed>
+ module.vpc.google_compute_firewall.allow-bastion
id: <computed>
allow.#: "1"
allow.803338340.ports.#: "1"
allow.803338340.ports.0: "22"
allow.803338340.protocol: "tcp"
creation_timestamp: <computed>
destination_ranges.#: <computed>
direction: <computed>
name: "company-fw-allow-bastion"
network: "company-dev-vpc"
priority: "1000"
project: <computed>
self_link: <computed>
source_ranges.#: <computed>
target_tags.#: "1"
target_tags.4002270276: "ssh"
+ module.vpc.google_compute_firewall.allow-http
id: <computed>
allow.#: "1"
allow.272637744.ports.#: "1"
allow.272637744.ports.0: "80"
allow.272637744.protocol: "tcp"
creation_timestamp: <computed>
destination_ranges.#: <computed>
direction: <computed>
name: "company-fw-allow-http"
network: "company-dev-vpc"
priority: "1000"
project: <computed>
self_link: <computed>
source_ranges.#: <computed>
target_tags.#: "1"
target_tags.2541227442: "http"
+ module.vpc.google_compute_firewall.allow-internal
id: <computed>
allow.#: "3"
allow.1367131964.ports.#: "0"
allow.1367131964.protocol: "icmp"
allow.2250996047.ports.#: "1"
allow.2250996047.ports.0: "0-65535"
allow.2250996047.protocol: "tcp"
allow.884285603.ports.#: "1"
allow.884285603.ports.0: "0-65535"
allow.884285603.protocol: "udp"
creation_timestamp: <computed>
destination_ranges.#: <computed>
direction: <computed>
name: "company-fw-allow-internal"
network: "company-dev-vpc"
priority: "1000"
project: <computed>
self_link: <computed>
source_ranges.#: "4"
source_ranges.1778211439: "10.26.2.0/24"
source_ranges.2728495562: "10.26.3.0/24"
source_ranges.3215243634: "10.26.4.0/24"
source_ranges.4016646337: "10.26.1.0/24"
+ module.vpc.google_compute_network.vpc
id: <computed>
auto_create_subnetworks: "false"
gateway_ipv4: <computed>
name: "company-dev-vpc"
project: <computed>
routing_mode: "GLOBAL"
self_link: <computed>
Plan: 10 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
Terraform apply Outputs
Output from Terraform apply
Google Console Output Screenshots:
GCP Network
GCP Instance dashboard
NGINX installed using metadata
Terraform destroy output
Terraform is great because of its vibrant open source community, its simple module paradigm and the fact that it's cloud-agnostic. However, there are limitations to the open source tool.
Terraform Enterprise (TFE) provides a host of additional features and functionality that address the open source edition's gaps and enable enterprises to scale Terraform effectively across the organization, unlocking infrastructure bottlenecks and freeing up developers to innovate rather than configure servers!
#terraform #gcp #googlecloud #cloudcomputing
A multi-cloud approach is nothing but leveraging two or more cloud platforms for meeting the various business requirements of an enterprise. The multi-cloud IT environment incorporates different clouds from multiple vendors and negates the dependence on a single public cloud service provider. Thus enterprises can choose specific services from multiple public clouds and reap the benefits of each.
Given its affordability and agility, most enterprises opt for a multi-cloud approach in cloud computing now. A 2018 survey on the public cloud services market points out that 81% of the respondents use services from two or more providers. Subsequently, the cloud computing services market has reported incredible growth in recent times. The worldwide public cloud services market is all set to reach $500 billion in the next four years, according to IDC.
By choosing multi-cloud solutions strategically, enterprises can optimize the benefits of cloud computing and aim for some key competitive advantages. They can avoid the lengthy and cumbersome processes involved in buying, installing and testing high-priced systems. IaaS and PaaS solutions have become a windfall for the enterprise's budget, as they do not incur huge up-front capital expenditure.
However, cost optimization is still a challenge when facilitating a multi-cloud environment, and a large number of enterprises end up overpaying, with or without realizing it. The tips below will help you ensure money is spent wisely on cloud computing services.
Most organizations tend to get simple things wrong, and these turn out to be the root cause of needless spending and resource wastage. The first step to cost optimization in your cloud strategy is to identify underutilized resources that you have been paying for.
Enterprises often continue to pay for resources that were purchased earlier but are no longer useful. Identifying such unused and unattached resources and deactivating them on a regular basis brings you one step closer to cost optimization. If needed, you can deploy automated cloud management tools, which are largely helpful in providing the analytics needed to optimize cloud spending and cut costs on an ongoing basis.
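On Google Cloud, for example, one quick way to surface such resources is to list persistent disks that are not attached to any instance; other providers have equivalent commands:
# List persistent disks with no attached users (candidates for cleanup)
gcloud compute disks list --filter="-users:*"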
Another key cost optimization strategy is to identify idle computing instances and consolidate them into fewer instances. An idle instance may sit at a CPU utilization level of 1-5%, yet the service provider bills you for 100% of that instance.
Every enterprise has non-production instances like these that consume unnecessary storage space and lead to overpaying. Re-evaluating your resource allocations regularly and removing unnecessary storage can save you money significantly. Resource allocation is not only a matter of CPU and memory; it is also linked to storage, network, and various other factors.
The key to efficient cost reduction in cloud computing technology lies in proactive monitoring. A comprehensive view of the cloud usage helps enterprises to monitor and minimize unnecessary spending. You can make use of various mechanisms for monitoring computing demand.
For instance, you can use a heatmap to understand the highs and lows in computing demand visually. A heatmap indicates suitable start and stop times, which in turn lead to reduced costs. You can also deploy automated tools that schedule instances to start and stop; by following a heatmap, you can determine whether it is safe to shut down servers on holidays or weekends.
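As a concrete sketch on Google Cloud, a weekend shutdown can be as simple as cron entries calling gcloud; the instance name and zone below are placeholders:
# Stop the instance every Friday at 20:00, start it again Monday at 06:00
0 20 * * 5 gcloud compute instances stop my-instance --zone=us-central1-b
0 6 * * 1 gcloud compute instances start my-instance --zone=us-central1-b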
#cloud computing services #all #hybrid cloud #cloud #multi-cloud strategy #cloud spend #multi-cloud spending #multi cloud adoption #why multi cloud #multi cloud trends #multi cloud companies #multi cloud research #multi cloud market
1597833840
If you are looking to learn about Google Cloud in depth or in general, with or without any prior knowledge of cloud computing, then you should definitely check this quest out: Link.
Google Cloud Essentials is an introductory-level quest that is useful for learning the basic fundamentals of Google Cloud. From writing Cloud Shell commands and deploying my first virtual machine, to running applications on Kubernetes Engine or with load balancing, Google Cloud Essentials is a prime introduction to the platform's basic features.
Let's see what the quest outline was:
A Tour of Qwiklabs and Google Cloud was the first hands-on lab, which basically gives an overview of Google Cloud. There were a few questions to answer that check your understanding of the topic, and the rest was about accessing the Google Cloud console, projects in the console, roles and permissions, Cloud Shell, and so on.
**Creating a Virtual Machine** was the second lab: creating a virtual machine and also connecting an NGINX web server to it. Compute Engine lets you create virtual machines whose resources live in specific regions and zones. The NGINX web server is used as a load balancer; the job of a load balancer is to distribute workloads across multiple computing resources. Creating these two, along with a question, marked the end of the second lab.
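Outside the lab UI, the same kind of virtual machine can be created from Cloud Shell with gcloud; the name, zone, and machine type here are example values:
# Create a small VM in us-central1-a (all values are examples)
gcloud compute instances create my-vm \
  --zone=us-central1-a \
  --machine-type=n1-standard-1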
#google-cloud-essentials #google #google-cloud #google-cloud-platform #cloud-computing #cloud
We strive to provide every customer business with Google Cloud hosting web services and managed services that are entirely personalized around the commercial and development goals of the company in the USA. Businesses that work with us will see a marked improvement in efficiency. Managed Google Cloud Platform services from SISGAIN help organisations leverage this relative newcomer's big data and machine learning capabilities via our team of approachable experts. From solution design to in-life support, we take the operational burden off dev and product development teams. For more information, call us at +18444455767 or email us at hello@sisgain.com.
#google cloud platform services #google cloud hosting web services #google cloud web hosting #gcp web hosting #google cloud server hosting #google vps hosting
In this lab, we will configure Cloud Content Delivery Network (Cloud CDN) for a Cloud Storage bucket and verify caching of an image. Cloud CDN uses Google's globally distributed edge points of presence to cache HTTP(S) load-balanced content close to our users. Caching content at the edges of Google's network provides faster delivery of content to our users while reducing serving costs.
For an up-to-date list of Google's Cloud CDN cache sites, see https://cloud.google.com/cdn/docs/locations.
Cloud CDN content can originate from different types of backends. In this lab, we will configure a Cloud Storage bucket as the backend.
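The heart of that configuration can also be expressed on the command line; a minimal sketch (the backend and bucket names are placeholders) that fronts an existing Cloud Storage bucket with Cloud CDN enabled:
# Create a backend bucket for an existing GCS bucket, with CDN enabled
gcloud compute backend-buckets create my-backend-bucket \
  --gcs-bucket-name=my-content-bucket \
  --enable-cdn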
#google-cloud #google-cloud-platform #cloud #cloud storage #cloud cdn
The way consumers make their everyday decisions is evolving, as digital ways of working, shopping and communicating have become the new normal. So now it's more important than ever for companies in the retail sector to prioritise an insights-driven technology strategy and understand what's truly important for their customers.
Through its partnerships with some of the world's leading retailers and brands, Google Cloud provides solutions that address the retail sector's most challenging problems, whether it's creating flexible demand forecasting models to optimize inventory or transforming e-commerce using AI-powered apps. Over the past few years, we've been observing and analyzing the many facets of changing consumer behaviour. We are here to support retailers and brands as they transform their businesses to adapt to this new landscape.
Featuring consumer research and insights from your peers, Google Cloud's Retail & Consumer Goods Summit will offer candid conversations to help you solve your challenges. We'll be joined by industry innovators, including Carrefour Belgium and L'Oréal, who'll discuss the future of retail and consumer goods.
#cloud native #google cloud platform #google cloud in europe #cloud #google cloud