Hosts Carter Morgan and Anthony Bushong are in the studio this week! We’re talking about Prometheus with guests Lee Yanco and Ashish Kumar and learning about the build process for Google Cloud’s Managed Service for Prometheus and how Home Depot uses this tool to power their business. To begin with, Lee helps us understand what Managed Service for Prometheus is. Prometheus, a popular monitoring solution for Kubernetes, lets you know that your project is up and running and, in the event of a failure, what happened. But as Kubernetes projects scale and spread across the globe, Prometheus becomes a challenge to manage, and that’s where Google Cloud’s Managed Service for Prometheus comes in. Lee describes why Prometheus is so great for Kubernetes, and Ashish talks about how CNCF’s involvement helps open source tools integrate easily. With the help of Monarch, Google’s Managed Service stands above the competition, and Lee explains what Monarch is and how it works with Prometheus to benefit users. Ashish talks about Home Depot’s use of Google Cloud and the Managed Service for Prometheus, and how Home Depot’s multiple data centers make data monitoring both trickier and more important. With Google Cloud, Home Depot is able to easily ensure everything is healthy and running across data centers, around the world, at an immense scale. He describes how Home Depot uses Managed Service for Prometheus in each of these data center environments from the point of view of a developer and talks about how easy Prometheus and the Managed Service are to integrate and use. Lee and Ashish wrap up the show with a look at how Home Depot and Google have worked together to create and adjust tools for increased efficiency. In the future, tighter integration into the rest of Google Cloud’s suite of products is the focus. Lee Yanco Lee Yanco is the Product Management lead for Google Cloud Managed Service for Prometheus. He also works on Monarch, Google’s planet-scale in-memory time series database, and on Cloud Monitoring’s Kubernetes observability experience. Ashish Kumar Ashish Kumar is Senior Manager for Site Reliability and Production Engineering for The Home Depot. Cool things of the week Cloud Next registration is open site Introducing Parallel Steps for Workflows: Speed up workflow executions by running steps concurrently blog How to think about threat detection in the cloud blog GCP Podcast Episode 218: Chronicle Security with Dr. Anton Chuvakin and Ansh Patniak podcast Interview Prometheus site PromQL site Google Cloud Managed Service for Prometheus docs Kubernetes site CNCF site Monarch: Google’s Planet-Scale In-Memory Time Series Database research Cloud Monitoring site Cloud Logging site Google Cloud’s operations suite site What’s something cool you’re working on? Carter is focusing on getting organized, managing overwhelm, and comedy festivals. Anthony is testing a few exciting new features, working with build provenance in Cloud Build and with jobs and network file systems in Cloud Run. Hosts Carter Morgan and Anthony Bushong
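The episode stays at the conceptual level, but for readers who want to poke at the managed service directly, here is a minimal Python sketch of querying it with PromQL over its Prometheus-compatible HTTP API. The project ID, cluster label, and PromQL expression are placeholders, and the endpoint path and OAuth scope should be confirmed against the Managed Service for Prometheus docs; this is an illustration under those assumptions, not material from the episode.

# Minimal sketch: run a PromQL query against Google Cloud Managed Service for
# Prometheus. Assumes Application Default Credentials are configured and the
# caller can read monitoring data in the project.
import google.auth
import google.auth.transport.requests
import requests

PROJECT_ID = "my-project"            # placeholder
PROMQL = 'up{cluster="my-cluster"}'  # placeholder PromQL expression

# Obtain an OAuth2 access token via Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/monitoring.read"]
)
credentials.refresh(google.auth.transport.requests.Request())

# The managed service exposes a Prometheus-compatible query endpoint under the
# Cloud Monitoring API (path assumed here; check the docs for your setup).
url = (
    f"https://monitoring.googleapis.com/v1/projects/{PROJECT_ID}"
    "/location/global/prometheus/api/v1/query"
)
resp = requests.get(
    url,
    params={"query": PROMQL},
    headers={"Authorization": f"Bearer {credentials.token}"},
    timeout=30,
)
resp.raise_for_status()

# Standard Prometheus instant-query response: data.result is a list of series.
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])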
Stephanie Wong and Carter Morgan are back this week learning about Google’s Distributed Cloud Edge for telcos with guests Krishna Garimella and DP Ayyadevara. Launched last year, Google Distributed Cloud Edge has benefited companies across many industries. Today, our guests are here to elaborate on how telecommunications companies specifically are leveraging this powerful tool. Because telcos deliver essential services, they tend to create detailed plans for their infrastructure in advance and stick with this setup for many years, DP tells us. Identifying the right tools for their project is vital, and Google has created and improved on many services to aid the telecommunications sector. Contact Center AI, for example, helps with customer service needs. Specifically, our guests elaborate on the modernization of telco networks through managed infrastructure offerings. We learn more about Google Distributed Cloud Edge and the managed hardware and software stack that’s included. Container as a Service for optimal network function is Google’s first focus in supporting telco companies with Distributed Cloud, and it has been used for 5G rollouts. Google has been working behind the scenes to make Kubernetes more telco-friendly as well, so that projects are more portable, scalable, and able to leverage Kubernetes benefits easily. Krishna gives us some great real-life examples of telecommunications companies using GDC Edge in areas like virtual broadband networks. In order to dedicate maximum resources to customer workloads, the team chose to keep the Kubernetes control plane in the cloud while worker nodes are at the edge. With superior security protection, minimal service disruption, and more, GDC Edge boasts fleet management as a core capability. In order to continue satisfying telcos’ needs, Google collaborates with many businesses to grow with changing customer needs. Krishna Garimella Krishna is a technology evangelist who has worked with service providers across the globe in the network and media areas. DP Ayyadevara DP is the Product Group Product Manager leading Telco Network Modernization products and solutions at Google Cloud. Cool things of the week Cloud TPU v4 records fastest training times on five MLPerf 2.0 benchmarks blog Show off your cloud skills by completing the #GoogleClout weekly challenge blog Interview Distributed Cloud site Distributed Cloud Edge Documentation docs Contact Center AI site Kubernetes site Anthos site Nephio site BigQuery site Vertex AI site What’s something cool you’re working on? Carter made a test for a video recap version of the recent pi episode. Stephanie recently made a pi video as well and is working on an AlphaFold video and the Cloud client library new reference docs homepage rollout. Hosts Carter Morgan and Stephanie Wong
Your hosts Max Saltonstall and Carter Morgan talk with guests Cody Ault and Jo-Anne Bourne of Veeam. Veeam is revolutionizing the data space by minimizing data loss impacts and project downtime with easy backups and user-friendly disaster recovery solutions. As a software company, Veeam is able to stay flexible with its solutions, helping customers keep any project safe. Cody explains what is meant by disaster recovery and how different systems might require different levels of fail-safe protection. Jo-Anne talks about the financial cost of downtime and how Veeam can help save money by planning for and preventing downtime. Veeam Backup & Replication is the main offering that can be customized depending on workloads, Cody tells us. He gives examples of how this works for different types of projects. Businesses can easily make plans for recovery and data backups then implement them with the help of Veeam. Cody talks about cloud migration and how Veeam can streamline this process with its replication services, and Jo-Anne emphasizes the importance of these recovery processes for data in the cloud. The journey from fledgling Veeam to their current suite of offerings was an interesting one, and Cody talks about this evolution, starting with the simple VM backups of version 5. As companies have brought new recovery challenges, Veeam has grown to provide these services. Their partnership with Google has grown as well, as they continue to leverage Google offerings and support Google Cloud customers. We hear examples of Veeam customers and how they use the software, and Cody tells us a little about the future of Veeam. Cody Ault Cody has been at Veeam for over 11 years in various roles and departments, including Technical Lead for the US Support team, Advisory Architect for Presales Solutions Architecture, and Staff Solutions Architect for Product Management Alliances. He has acted as the performance, databases, security, and monitoring specialist for North America for the Presales team and has helped develop the Veeam Design Methodology and Architecture Documentation template. Cody is currently working with the Alliances team focusing on Google Cloud, Kubernetes and Red Hat. Jo-Anne Bourne Jo-Anne is a Partner Marketing Strategist who works with global companies to support them in positioning company products with their customer base. She is effective in developing strategic partnerships with International Resellers, CCaaS partners, Systems Integrators, OEM partners and ISV partners like Amazon, Microsoft, Avaya, Cisco, Five9, and BT to develop strategies that enable sales teams to generate significant revenue and, in turn, build profitability for the company. Jo-Anne is a brand steward successful in using analytics to create results-driven campaigns that increase brand awareness, generate sales leads, improve customer engagement and strengthen partner relationships. Cool things of the week Announcing general availability of reCAPTCHA Enterprise password leak detection blog Cloud Podcasts site Bio-pharma organizations can now leverage the groundbreaking protein folding system, AlphaFold, with Vertex AI blog Interview Veeam site Veeam for Google Cloud site VeeamHub site Google Cloud VMware Engine site Cloud SQL site Kasten site Kubernetes site GKE site What’s something cool you’re working on? Carter is working on the new Cloud Podcasts website. Max is working on research papers about how we built and deployed Google’s Zero Trust system for employees, BeyondCorp. 
Kelci is working on creating a series of blog posts highlighting the benefits of having access to public data sets embedded within BigQuery. Hosts Carter Morgan and Max Saltonstall
This week on the GCP Podcast, Carter Morgan and Max Saltonstall are joined by Amit Kumar and Vasili Triant. Our guests are here to talk about new features in Contact Center AI. Amit starts the show by helping us understand what Contact Center as a Service is and what makes this unified platform so useful for enterprise companies. The scalability helps keep costs down and overall satisfaction up while leveraging advances in the cloud. UJET and Google Cloud have worked together to bring this AI advancement, and our guests describe the partnership and the evolution of CCAI. CCAI has streamlined the Contact Center as a Service space, helping businesses work efficiently while putting an emphasis on positive experiences for the end customer. CCAI users can use the platform straight out of the box or customize it to build specific experiences with tools like Dialogflow. Amit further describes the tools available, like Interactive Voice Response, and the circumstances in which each tool would be most useful...
Carter Morgan and Brian Dorsey are working on their math skills today with guests Emma Haruka Iwao and Sara Ford. What kind of computing power does it take to break the world record for pi computations? Emma and Sara are here to tell us. Emma tells us how she started with pi and how she and Sara came to work together to break the record. In 2019, Emma was on the show with her previous world record, and with the advancements in technology and Google products since, she knew she could do even more this year. Her 100 trillion digit goal wasn’t enough to scare people away, and Sara, along with other partners, joined Emma on the pi computation journey. Together, Sara and Emma talk about the hardware required, building the algorithm, how it’s run, and where the data is stored. Running on a personal computer was cheaper and easier than a supercomputer, and Emma explains why. Performing these immense calculations can also help illustrate just how far computers have come. The storage required for this project was immense, and Emma tells us how they worked around some of the storage limitations. We hear more about y-cruncher and how it was used to help with calculations. Our guests talk about how things might change for computing and specifically for pi computations in the next few years. Sara tells us about the storage journey from the perspective of a mathematician and gives us some interesting facts about the algorithms involved, and we learn how world records are verified. Emma Haruka Iwao Emma is a developer advocate for Google Cloud Platform, focusing on application developers’ experience and high performance computing. She has been a C++ developer for 15 years and worked on embedded systems and the Chromium Project. Emma is passionate about learning and explaining the most fundamental technologies such as operating systems, distributed systems, and internet protocols. Besides software engineering, she likes games, traveling, and eating delicious food. Sara Ford Sara Ford is a Developer Advocate on Google Cloud focusing on Serverless. She received a master’s degree in Human Factors (UX) because she wants to make dev tools more usable. Her lifelong dream is to be a 97-year-old weightlifter so she can be featured on the local news. Cool things of the week New Cloud Podcasts Website site Even more pi in the sky: Calculating 100 trillion digits of pi on Google Cloud blog Interview GCP Podcast Episode 167: World Pi Day with Emma Haruka Iwao podcast pi.delivery 100 Trillion Digits site pi.delivery Github site A History of Pi book Distributing historically linear calculations of Pi with serverless video y-cruncher site Compute Engine site Cloud Functions site SRE site Terraform site What’s something cool you’re working on? Carter and Brian are working on a new season of VM End to End Hosts Carter Morgan and Brian Dorsey
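For context on the math behind a run like this: record pi computations are typically based on the Chudnovsky series (y-cruncher’s heavily optimized internals go far beyond this), and a toy-scale version is easy to sketch in Python with the standard-library decimal module. The loop bound of roughly one term per 14 digits reflects the series’ convergence rate; this is purely illustrative and not the code used for the record.

# Toy-scale sketch of the Chudnovsky series commonly used for record pi
# computations. Real record runs use specialized software such as y-cruncher;
# this loop only illustrates the underlying series.
from decimal import Decimal, getcontext

def chudnovsky_pi(digits: int) -> Decimal:
    # Extra guard digits absorb rounding error in the sqrt and final division.
    getcontext().prec = digits + 10
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    # Each term of the series contributes roughly 14 new decimal digits.
    for i in range(1, digits // 14 + 2):
        M = M * (K ** 3 - 16 * K) // (i ** 3)  # exact integer recurrence
        L += 545140134
        X *= -262537412640768000               # (-640320) ** 3
        S += Decimal(M * L) / X
        K += 12
    return +(C / S)  # unary plus rounds to the working precision

if __name__ == "__main__":
    print(str(chudnovsky_pi(100))[:102])  # "3." plus the first 100 digits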
On the podcast this week, guest Joe Daly tells Stephanie Wong, Mark “Money” Mirchandani, and our listeners all about FinOps principles and how they’re helping companies take advantage of the cloud while saving their bottom lines. He describes FinOps as financial DevOps, making financial decisions in an effective and optimized way. With his experience in finance and tax accounting, Joe has developed a special knack for navigating the sometimes confusing world of cloud finance policies, and his contributions to the FinOps Foundation have been many. For starters, collaboration with various business departments is important for developing a plan that leverages the benefits of the cloud but keeps the company using resources wisely, Joe explains. He talks about the FinOps Foundation and their focus on creating community for knowledge sharing. By fostering collaboration among different company roles and promoting financial education, companies are better able to determine financial goals while making sure each facet of the company reaps all the benefits of cloud participation. Following the FinOps cycle is the easiest way for community members to get started. The three steps, Joe tells us, are inform, optimize, and operate. The inform phase involves clarity in spending so teams understand how much money is being spent. In the optimize phase, benefits of spending are matched with expenditures to ensure resources are being used to their full potential. Finally, in the operate phase, engineers and finance managers come together to understand why solutions were chosen and whether these tools are offering the right answers for the company. Every company is different, but the sooner the FinOps journey starts, the easier it will be to maintain in the future. Joe gives us examples of how companies are using the principles for successful strategies and the challenges that some of them have faced. The Foundation has monthly summits that offer perspectives from these companies as well as partner presentations. The FinOpsX conference is coming up soon as well. To wrap up, Joe offers other resources from the FinOps Foundation, including his podcast. Joe Daly Joe set up two FinOps teams at Fortune 100 companies. He joined the FinOps Foundation and has been setting up the ambassador program, supporting meetup groups, and producing FinOpsPod. Cool things of the week AlloyDB for PostgreSQL under the hood: Columnar engine blog GCP Podcast Episode 304: AlloyDB with Sandy Ghai and Gurmeet “GG” Goindi podcast How Google Cloud is helping more startups build, grow, and scale their businesses blog Automate identity document processing with Document AI blog Interview FinOps Foundation site FinOpsX site FinOpsPod podcast Cloud FinOps: The Secret To Unlocking The Economic Potential Of Public Cloud whitepaper Maximize Business Value with Cloud FinOps whitepaper Unlocking the value of cloud FinOps with a new operating model whitepaper Hosts Stephanie Wong and Mark Mirchandani
Stephanie Wong and Lorin Price welcome guests Zach Seils and Manasa Chalasani to talk about networking and the newly released Network Analyzer. Google Cloud’s Network Intelligence Center is described as a one-stop shop that simplifies network monitoring, troubleshooting, workload expansion, security, and more. Manasa tells us about the four modules of Network Intelligence Center and how they work together. As part of Network Intelligence Center, the new Network Analyzer proactively monitors the network, runs tests, and detects issues automatically, taking the guesswork out of network troubleshooting. Network Analyzer checks the entire network ecosystem, finding any connectivity issues and extrapolating them to other similar situations as well. Zach tells us more about the specific features of Analyzer, like its ability to check for overlapping or shadowed routes and to validate network configurations in relation to any managed services being used. Zach walks us through the setup of Network Analyzer and how to navigate results. Manasa expands on the development of Network Analyzer, including how customer feedback really shaped the project, and we hear about challenges along the way. Through examples, Zach describes different types of Analyzer customers and how they’re using the product. More analyzers will be available soon, and the team is open to suggestions for future projects. Zach Seils Zach Seils is a Networking Specialist with Google Cloud, where he works with customers to accelerate their adoption of cloud networking. Manasa Chalasani Manasa is a Product Manager on the Google Cloud Networking team with a focus on network observability. Cool things of the week The new Google Cloud region in Columbus, Ohio is open blog Assembling and managing distributed applications using Google Cloud Networking solutions blog Interview Network Intelligence Center site Network Analyzer Documentation docs Introducing Network Analyzer: One stop shop to detect service and network issues blog Cloud SQL site GKE site Cloud Monitoring site Contact the Network Analyzer team email GCP Podcast Episode 270: Traditional vs. Service Networking with Ryan Przybyl podcast What’s something cool you’re working on? Lorin is working on a new video series called Concepts of Networking on the Networking End to End Playlist Hosts Stephanie Wong and Lorin Price
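Network Analyzer findings are surfaced as insights, so they can be read programmatically as well as in the console. Below is a rough Python sketch using the google-cloud-recommender client; the project, location, and especially the insight type string are placeholders (the real Network Analyzer insight type IDs are listed in its documentation), so treat this as an assumed pattern rather than a verbatim recipe.

# Minimal sketch: list insights of one insight type with the Recommender API.
# Assumes google-cloud-recommender is installed and Application Default
# Credentials can view insights in the project.
from google.cloud import recommender_v1

PROJECT_ID = "my-project"  # placeholder
LOCATION = "global"        # placeholder; some insight types are regional
# Placeholder: substitute a real Network Analyzer insight type ID from the docs.
INSIGHT_TYPE = "google.networkanalyzer.EXAMPLE_INSIGHT_TYPE"

client = recommender_v1.RecommenderClient()
parent = f"projects/{PROJECT_ID}/locations/{LOCATION}/insightTypes/{INSIGHT_TYPE}"

# Print a one-line summary of each insight returned for this type.
for insight in client.list_insights(parent=parent):
    print(insight.name, insight.category, insight.description)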
Kaslin Fields and Mark Mirchandani learn how GKE manages its releases and how customers can take advantage of the GKE release channels for smooth transitions. Guests Abdelfettah Sghiouar and Kobi Magnezi of the Google Cloud GKE team are here to explain. With releases every four months or so, Kobi tells us that Kubernetes requires two pieces to be managed with each release: the control plane and the nodes. Both are managed for the customer in GKE. The new addition of release channels allows flexibility with release updating so customers can adjust to their specific project needs. Each channel offers a different update mix and speed, and clients choose the channel that’s right for their project. The idea for release channels isn’t a new one, Kobi explains. In fact, Google’s frequent project releases, while keeping things secure and running well, also can be customized by choosing from an assortment of channels in other Google offerings like Chrome. Our guests talk us through the process of releasing through channels and how each release marinates in the Rapid channel to be sure the version is supported and secure before being pushed to customers through other channels. We hear how release channels differ from no-channel releases, the benefits of specialized channels, and recommendations for customers as far as which channels to use with different development environments. Abdel describes real-world use cases for the Rapid, Regular, and Stable channels, the Surge Upgrade feature, and how GKE notifications with Pub/Sub help in the updating process. Kobi talks about maintenance and exclusion windows to help customers further customize when and how their projects will update. Kobi and Abdel wrap up with a discussion of the future of GKE release channels. Kobi Magnezi Kobi is the Product Manager for GKE at Google Cloud. Abdelfettah Sghiouar Abdel is a Cloud Dev Advocate with a focus on Cloud native, GKE, and Service Mesh technologies. Cool things of the week GKE Essentials videos KubeCon EU 2023 site KubeCon Call for Proposals site Kubernetes 1.24: Stargazer site GCP Podcast Episode 292: Pulumi and Kubernetes Releases with Kat Cosgrove podcast Optimize and scale your startup on Google Cloud: Introducing the Build Series blog Interview Kubernetes site GKE site Autoscaling with GKE: Overview and pods video GKE release schedule docs Release channels docs Upgrade-scope maintenance windows docs Configure cluster notifications for third-party services docs Cluster notifications docs Pub/Sub site Agones site What’s something cool you’re working on? Kaslin is working on KubeCon and new episodes of GKE Essentials. Hosts Mark Mirchandani and Kaslin Fields
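Since cluster notifications are delivered as ordinary Pub/Sub messages, consuming them is just a Pub/Sub subscriber. Here is a minimal Python sketch with the google-cloud-pubsub client; the project and subscription IDs are placeholders, and the exact message attributes and payload fields should be checked against the cluster notifications documentation.

# Minimal sketch: pull GKE cluster notifications (for example, upgrade events)
# from a subscription on the cluster's notification Pub/Sub topic.
# Assumes google-cloud-pubsub is installed and ADC has subscriber access.
from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"                  # placeholder
SUBSCRIPTION_ID = "gke-notifications-sub"  # placeholder

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)

def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    # Attributes identify the cluster and event type; the payload carries the
    # event details (see the cluster notifications docs for exact fields).
    print(dict(message.attributes))
    print(message.data.decode("utf-8"))
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
print(f"Listening for cluster notifications on {subscription_path}...")
try:
    streaming_pull.result(timeout=60)  # listen for one minute, then shut down
except TimeoutError:
    streaming_pull.cancel()
    streaming_pull.result()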
AlloyDB for PostgreSQL has launched and hosts Mark Mirchandani and Gabe Weiss are here this week to talk about it with guests Sandy Ghai and Gurmeet Goindi. This fully managed, Postgres-compatible database for enterprise use combines the power of Google Cloud and the best features of Postgres for superior data management. AlloyDB began years ago as a solution to help manage huge data migrations to the cloud. Customers needed a way to take advantage of the benefits of cloud, modernizing their databases as they migrated in an easy, flexible, and scalable way. Databases had to maintain performance and availability while offering enterprise customers optimal security features and more. We learn why PostgreSQL is important in the equation and how AlloyDB is built with Google scaling abilities and ML while supporting open source compatibility. We talk about data analytics workloads and how AlloyDB handles in-the-moment analytics needs. Our guests describe and compare different database offerings at Google, emphasizing the solutions that set AlloyDB apart. We chat about the types of projects each database is best suited for and how AlloyDB fits into the Google database portfolio. We hear examples of customers moving to AlloyDB and how they’re using this new service. Clients have been leveraging the embedded ML features for better data management. Sandy Ghai Sandy is a product manager on GCP Databases and has been working on the AlloyDB team since the beginning. Gurmeet “GG” Goindi GG is a product manager at Google, where he focuses on databases and attends meetings. Prior to joining Google, GG led product management for Exadata at Oracle, where he also worked on databases and attended meetings. GG has had various product management, management, and engineering roles for the last 20 years in Silicon Valley, but his favorite meetings have been at Google. He holds an MBA from the University of Chicago Booth School of Business. Cool things of the week Google I/O site Introducing “Visualizing Google Cloud: 101 Illustrated References for Cloud Engineers and Architects” blog Meet the people of Google Cloud: Priyanka Vergadia, bringing Google Cloud to life in illustrations blog Working with Remote Functions docs Interview AlloyDB for PostgreSQL site AlloyDB Documentation docs AlloyDB for PostgreSQL under the hood: Intelligent, database-aware storage blog PostgreSQL site Introducing AlloyDB for PostgreSQL video Introducing AlloyDB, a PostgreSQL-compatible cloud database service video BigQuery site Spanner site Cloud SQL site What’s something cool you’re working on? Gabe is working on some exciting content to support landing the AlloyDB launch. Hosts Mark Mirchandani and Gabe Weiss
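Because AlloyDB is PostgreSQL-compatible at the wire-protocol level, existing Postgres drivers and tools connect to it unchanged. As a small illustration of that compatibility (the host, credentials, and the use of something like the AlloyDB Auth Proxy for connectivity are assumptions to adapt from the AlloyDB docs), a standard psycopg2 connection works as it would for any Postgres database:

# Minimal sketch: connect to an AlloyDB instance with a standard PostgreSQL
# driver. The host assumes a local AlloyDB Auth Proxy or direct access to the
# instance's private IP; every value below is a placeholder.
import psycopg2

conn = psycopg2.connect(
    host="127.0.0.1",        # placeholder: proxy address or instance private IP
    port=5432,
    dbname="postgres",
    user="postgres",
    password="my-password",  # placeholder
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])  # reports a PostgreSQL-compatible version string
conn.close()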
This week, Googler Denise Pearl and NGIS Executive Director Nathan Eaton join hosts Alexandrina Garcia-Verdin and Donna Schut to talk about how modern technology and data collection can significantly enhance environmental protection practices. Denise starts the show with a thorough explanation of the geospatial awakening and how Google is making its backend geo services like Google Earth Engine more usable for Google Cloud customers. With better data, easier access, and substantially more cloud compute power, companies are awakening to the possibilities of geospatial-driven projects that analyze not just text but photographic data as well. Thousands of satellites collect information about Earth every day, and companies are realizing just how much of this data is available for their own sustainability, geo-centric, and location-based projects. Geospatial, Nathan explains, can help combine layers of text and photo data based on one location for a richer, more robust view of a particular location...