Certification Provider: Juniper
Exam Name: Mist AI, Specialist (JN0-450)
Duration: 90 minutes
Number of Questions: 65
Exam Version: Oct. 23, 2021
JN0-450 Exam Official Topics:
Exam Objectives
Here’s a high-level view of the skillset required to successfully complete the JNCIS-MistAI certification exam.
Wi-Fi Fundamentals
Identify the concepts or functionality of basic Wi-Fi technologies:
- 802.11 PHY
- Protocols
- Frequency bands
- RF basics
- Modulation and coding
- Network arbitration and contention
- WLAN association and roaming
- WLAN lifecycle
Mist Architecture and Deployment
Identify the concepts of the Mist architecture:
- General architecture concepts
- Account organization and subscriptions
- Configuration objects
- Organization objects
- Site objects
Identify the concepts or functionality of WLANs:
- WLAN concepts
- Security concepts
- Mist WLANs
- Policy (WxLAN)
- Guest portals
- Wireless intrusion detection and prevention
Demonstrate knowledge of WLAN configuration or troubleshooting:
- Multiple PSK
- Policy (WxLAN)
Mist Network Operations
Identify the components of Mist network operations:
- SLEs/Wireless Assurance
- Events and insights
- Radio Resource Management (RRM)
- Wired Assurance
Demonstrate knowledge of Mist configuration:
- SLE configuration
- SLE troubleshooting
Marvis AI
Identify the concepts and functionality of the Marvis AI-driven virtual network assistant:
- Reactive troubleshooting
- Proactive troubleshooting
- Marvis languages
- Marvis actions
Demonstrate knowledge of using the Marvis AI-driven virtual network assistant:
- Reactive troubleshooting
- Proactive troubleshooting
- Marvis languages
- Marvis actions
Mist Location-based Services (LBS)
Identify the concepts or methods of LBS:
- Wi-Fi location
- Virtual Bluetooth Low Energy (BLE)
- User engagement
- Asset visibility
- Proximity tracing
Mist Automation
Identify the concepts and functionality of Mist automation tools:
- RESTful APIs
- WebSocket API
- Webhook API
- Automation and scripting

QUESTION 1 In which two layers of the OSI model are WLANs located? (Choose two.)
A. Network
B. Transport
C. Physical
D. Data Link
Answer: C,D
QUESTION 2 What information would be streamed through webhooks? (Choose two.)
A. location coordinates of RFID tags
B. alerts
C. SLE metrics of clients
D. audit logs
Answer: A,D
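As background for the webhook question above: a webhook delivery is simply an HTTP POST of a JSON payload to a URL you register. The payload shape below (a "topic" plus an "events" list) is a hypothetical illustration for this sketch, not the documented Mist schema:

```python
import json

def handle_webhook(body: str) -> list[str]:
    """Parse a webhook POST body and return one summary line per event.

    Assumes a hypothetical payload shape: {"topic": ..., "events": [...]}.
    """
    payload = json.loads(body)
    topic = payload.get("topic", "unknown")
    return [f"{topic}: {event.get('type', '?')}" for event in payload.get("events", [])]

# Simulated delivery body, as a receiver endpoint might see it.
sample = json.dumps({
    "topic": "alarms",
    "events": [{"type": "ap_disconnected"}, {"type": "switch_down"}],
})
print(handle_webhook(sample))  # ['alarms: ap_disconnected', 'alarms: switch_down']
```

In practice the receiving endpoint would be an HTTPS server; the parsing logic is the same regardless of the framework used to accept the POST.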
QUESTION 3 What do 802.11 stations do to help avoid collisions on the WLAN medium?
A. 802.11 stations detect collisions and set a back-off timer.
B. Listen to verify that the medium is free before transmitting.
C. Stations only transmit when polled by the access point.
D. Transmit on a fixed schedule.
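For context, 802.11 uses CSMA/CA: a station senses the medium before transmitting and, under contention, waits a random backoff drawn from a contention window that roughly doubles with each retry. A simplified Python sketch of that binary exponential backoff (the cw_min/cw_max defaults follow common 802.11 values; this is an illustration, not a protocol implementation):

```python
import random

def backoff_slots(retry: int, cw_min: int = 15, cw_max: int = 1023) -> int:
    """Pick a random backoff (in slot times) for the given retry count.

    The contention window starts at cw_min and roughly doubles per retry,
    capped at cw_max, mirroring 802.11 binary exponential backoff.
    """
    cw = min(cw_max, (cw_min + 1) * (2 ** retry) - 1)
    return random.randint(0, cw)

random.seed(7)  # deterministic demo only
for retry in range(4):
    print(f"retry {retry}: wait {backoff_slots(retry)} slots")
```

The key point for the question above is the listen-before-talk step; the backoff only happens when the medium is (or was) busy.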
EXAM NUMBER: 2V0-71.21
PRODUCT: Application Modernization, VMware Tanzu, Kubernetes
EXAM LANGUAGE: English
Associated Certification: VCP-AM 2021
Exam Fee: $250 USD

Exam Info
Duration: 130 min
Number of Questions: 55
Passing Score: 300 (scaled)
Format: Single and Multiple Choice, Proctored
EXAM OVERVIEW
This exam tests a candidate’s expertise with VMware Tanzu Standard Edition, including vSphere with Tanzu, Tanzu Kubernetes Grid, and Tanzu Mission Control. The exam also tests fundamental cloud native skills, including containerization, Kubernetes, and application modernization.
PREPARE FOR THE EXAM
Recommended Training
- VMware vSphere with Tanzu: Deploy and Manage [V7]
- VMware Tanzu Kubernetes Grid: Install, Configure, Manage [V1.0]
- VMware Tanzu Mission Control: Management and Operations 2020
- Kubernetes Foundations
Additional Resources
- VCP Community
- VMware Customer Connect Learning
- VMware Press
- VMware Certification Market Place
Exam Details (Last Updated: 06/07/2021) The VMware Application Modernization (2V0-71.21) exam, which leads to the VMware Certified Professional – Application Modernization 2021 certification, is a 55-item exam with a passing score of 300 using a scaled method. Candidates are given 130 minutes to complete the exam, which includes adequate time for non-native English speakers.
Exam Delivery This is a proctored exam delivered through Pearson VUE. For more information, visit the Pearson VUE website.
Certification Information For details and a complete list of requirements and recommendations for attainment, please reference the VMware Education Services – Certification website.
Minimally Qualified Candidate The minimally qualified candidate (MQC) is recommended to have at least 6 to 12 months of experience. The MQC can understand and describe the standard features and capabilities of VMware Tanzu Standard Edition components, including VMware vSphere with Tanzu, VMware Tanzu Kubernetes Grid, and VMware Tanzu Mission Control. The MQC can explain the VMware Tanzu vision and has hands-on experience with containerization, Kubernetes, and application modernization concepts. The MQC can perform common operational and administrative tasks in an environment where Tanzu Standard Edition components are present, including basic troubleshooting and repair of TKG clusters. The MQC can identify the primary components and features of Tanzu Mission Control, including, but not limited to, cluster lifecycle management, role-based access control, security policies, cluster inspections, and data protection. In addition, the MQC can understand and describe the steps for installation and setup of Tanzu Standard Edition components.
The MQC works in environments where VMware Tanzu Kubernetes Grid, vSphere with Tanzu, and Tanzu Mission Control are used in production. The MQC can perform troubleshooting and repair of TKG and TKC clusters, and can perform day-2 operations related to them. A successful candidate has a good grasp of the topics included in the exam blueprint.
Exam Sections
VMware exam blueprint sections are now standardized to the seven sections below, some of which may NOT be included in the final exam blueprint depending on the exam objectives.
Section 1 – Architecture and Technologies
Section 2 – Products and Solutions
Section 3 – Planning and Designing
Section 4 – Installing, Configuring, and Setup
Section 5 – Performance-tuning, Optimization, and Upgrades
Section 6 – Troubleshooting and Repairing
Section 7 – Administrative and Operational Tasks
If a section does not have testable objectives in this version of the exam, it will be noted below, accordingly. The objective numbering may be referenced in your score report at the end of your testing event for further preparation should a retake of the exam be necessary.
Sections Included in this Exam

Section 1 – Architecture and Technologies
Objective 1.1 Describe Kubernetes Lifecycle Management Concepts
Objective 1.2 Describe Application Modernization Concepts
Objective 1.3 Describe Kubernetes Logical Objects
Objective 1.4 Describe Kubernetes Platform and Service Administration Concepts
Objective 1.5 Describe Kubernetes Cluster and Application Security Concepts
Objective 1.6 Describe Kubernetes Application Deployment and Lifecycle Management Concepts
Objective 1.7 Describe Kubernetes Networking and Storage Concepts

Section 2 – Products and Solutions
Objective 2.1 Understand and Explain Common Administration Requirements for Tanzu Kubernetes Grid
Objective 2.2 Understand and Explain Common Administration Requirements for vSphere with Tanzu
Objective 2.3 Understand and Explain Common Administration Requirements for Tanzu Mission Control
Section 3 – There are no testable objectives for this section
Section 4 – Installing, Configuring, and Setup
Objective 4.1 Know the Tanzu Kubernetes Portfolio
4.1.1 Tanzu Kubernetes solution that should be used
4.1.2 Tanzu services that should be used
4.1.3 Recommended design of TKGm
4.1.4 Configuration for the recommended design and image management
Objective 4.2 Describe Tanzu Kubernetes Grid
4.2.1 A bootstrap environment and its requirements
4.2.2 Tanzu Kubernetes cluster plans
4.2.3 TKG management cluster(s)
4.2.4 TKG workload clusters
4.2.5 Supervisor cluster services and capabilities
4.2.6 Shared and in-cluster services
4.2.7 Tanzu Kubernetes Grid instance
4.2.8 Tanzu Kubernetes Grid installer
4.2.9 Cluster access and authentication mechanism
Objective 4.3 Deploy an application to a cluster
4.3.1 Situation that would require a secret
4.3.2 Situation that would require a config map
4.3.3 Logging on Kubernetes
4.3.4 Metrics configuration for an application
4.3.5 Health check probes for an application
4.3.6 Expose an application to outside users
4.3.7 Recommended ways to expose an application
4.3.8 Examples of troubleshooting steps to identify errors
4.3.9 Influence scheduling in a cluster
Objective 4.4 Understand Tanzu Kubernetes Cluster Security
4.4.1 Harbor Registry security policies
4.4.2 Implement RBAC capabilities
4.4.3 Audit capabilities
4.4.4 Methods to implement pod security
4.4.5 Admission control options to implement on a cluster
Objective 4.5 Deploy Tanzu Kubernetes Grid
4.5.1 Prerequisites for the TKG installer
4.5.2 Prerequisites required to configure the vSphere environment for TKG
4.5.3 Steps required to create a management cluster
4.5.4 Configure the Tanzu Kubernetes clusters
4.5.5 Steps required to deploy and manage Tanzu Kubernetes Grid instances
4.5.6 Configure the ingress control
4.5.7 Different IaaS options supported by TKG

Section 5 – There are no testable objectives for this section

Section 6 – Troubleshooting and Repairing
Objective 6.1 Observe overall cluster health
Objective 6.2 Observe component health
Objective 6.3 Review cluster inspections
Objective 6.4 Review deployment logs
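Objective 4.3.5 mentions health check probes for an application. As a hedged illustration only (the pod name, image, ports, and URL paths are placeholders, not taken from the exam guide), a minimal Kubernetes pod manifest with liveness and readiness probes might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25       # example image
    ports:
    - containerPort: 80
    livenessProbe:          # restart the container if this check fails
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:         # remove the pod from Service endpoints while failing
      httpGet:
        path: /ready
        port: 80
      periodSeconds: 5
```

The distinction the exam objective implies: a failing liveness probe triggers a container restart, while a failing readiness probe only stops traffic from being routed to the pod.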
Section 7 – Administrative and Operational Tasks
Objective 7.1 Describe Administrative and Operational Tasks for Tanzu Mission Control
7.1.1 Common administrative and operational tasks
7.1.2 Configure Tanzu Mission Control service roles
7.1.3 Backup/restore a cluster using TMC
7.1.4 Cluster compliance scanning
7.1.5 TMC security
7.1.6 Policy model
7.1.7 Steps to provision a cluster
7.1.8 Describe how clusters are scaled and upgraded
7.1.9 Network policies
7.1.10 Describe the agent resources installed
7.1.11 Steps for attaching a cluster to VMware Tanzu Mission Control
7.1.13 Describe cluster inspections
7.1.14 Image registry policies
Objective 7.2 Describe Administrative and Operational Tasks for Tanzu Kubernetes Grid
7.2.1 Tanzu workload clusters in TKG
7.2.2 Cluster configuration and health
7.2.3 Upgrade Tanzu workload clusters in TKG
7.2.4 Harbor configuration for common container registry scenarios
7.2.5 Requirements for leveraging shared services
7.2.6 RBAC and access methods for Tanzu Kubernetes clusters
Objective 7.3 Describe Administrative and Operational Tasks for vSphere with Tanzu
7.3.1 Deploy Tanzu Kubernetes clusters with the TKG Service
7.3.2 Requirements for deploying workload management
7.3.3 vSphere Client pages for viewing Supervisor Cluster configuration and health
7.3.4 vSphere Client pages for viewing TKG Service and Tanzu Kubernetes cluster health
7.3.5 Content libraries for the Tanzu Kubernetes Grid Service (for vSphere)
7.3.6 Upgrade Tanzu Kubernetes clusters
7.3.7 Harbor configuration for common container registry scenarios
7.3.8 Requirements for leveraging shared services
7.3.9 RBAC and access methods for Tanzu Kubernetes clusters
7.3.10 Resource management techniques
7.3.11 vSphere namespace management
QUESTION 1 Which is the correct option to forward logs from Tanzu Kubernetes Grid clusters to Elastic, Kafka, Splunk, or an HTTPS endpoint?
A. No action is required. Tanzu Kubernetes Grid automatically forwards the log files to syslog.
B. Use the kubectl get logs command to forward the logs.
C. Deploy the Fluent Bit plugin to vRealize Log Insight.
D. Deploy Fluent Bit to forward the logs.
Answer: D
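The answer points at Fluent Bit. As a rough sketch only (the host, port, and token are placeholders, and the supported deployment path for TKG clusters should be taken from the TKG extension documentation), a Fluent Bit [OUTPUT] stanza forwarding records to a Splunk HEC endpoint could look like:

```ini
[OUTPUT]
    Name          splunk
    Match         *
    Host          splunk.example.com
    Port          8088
    Splunk_Token  00000000-0000-0000-0000-000000000000
    TLS           On
```

Fluent Bit ships comparable output plugins for Elasticsearch, Kafka, and generic HTTP, which is why it covers all the destinations named in the question.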
QUESTION 2 Which option of Tanzu Mission Control is used to manage namespaces within and across clusters?
A. Workspaces
B. Workhome
C. Workloads
D. Workgroup
Answer: A
QUESTION 3 Which step must be taken to enable Kubernetes auditing on a Tanzu Kubernetes cluster?
A. Set the ENABLE_AUDIT_LOGGING variable to "true" before deploying the cluster.
B. Run systemctl start auditd && systemctl enable auditd on the master node.
C. Audit is enabled by default on every Tanzu cluster.
D. Edit /etc/kubernetes/audit-policy.yaml and set the ENABLE_AUDIT variable to "1" on the master node.
Track Overview
A Splunk IT Service Intelligence Certified Admin installs and configures Splunk’s app for IT Service Intelligence (ITSI), including ITSI architecture, deployment planning, service design and implementation, notable events, and developing glass tables and deep dives. This certification demonstrates an individual’s ability to deploy, manage, and utilize Splunk ITSI to monitor mission-critical services.
Candidates who wish to prepare for the Splunk IT Service Intelligence Certified Admin exam are recommended to complete the following course: Implementing Splunk IT Service Intelligence
Candidates for the Splunk IT Service Intelligence Certified Admin exam are also expected to have working knowledge and experience as either Splunk Cloud or Splunk Enterprise administrators.
Certification Exam
Candidates who have completed all of the above requirements may register for the certification exam via testing partner Pearson VUE. Those who need step-by-step registration assistance should refer to our Exam Registration Tutorial.
The Splunk Certification Exams Study Guide contains important details regarding exam preparation and delivery.
Good luck!
Course Topics
- ITSI architecture and deployment
- Installing ITSI
- Designing services: discovery and best practices
- Implementing services and entities
- Configuring correlation searches and multi-KPI alerts
- Managing aggregation policies and anomaly detection
- Troubleshooting and maintenance
Course Prerequisites
- Splunk Fundamentals 1
- Splunk Fundamentals 2
- Splunk Enterprise System Administration
- Splunk Enterprise Data Administration
Course Objectives
Module 1 – Introducing ITSI
- Identify what ITSI does
- Describe reasons for using ITSI
- Examine the ITSI user interface

Module 3 – Managing Notable Events
- Define key notable events terms and their relationships
- Describe examples of multi-KPI alerts
- Describe the notable events workflow
- Work with notable events

Module 4 – Investigating Issues with Deep Dives
- Describe deep dive concepts and their relationships
- Use default deep dives
- Create and customize new deep dives
- Add and configure swim lanes
- Custom views
- Describe effective workflows for troubleshooting

Module 5 – Installing and Configuring ITSI
- List ITSI hardware recommendations
- Describe ITSI deployment options
- Identify ITSI components
- Describe the installation procedure
- Identify data input options for ITSI
- Add custom data to an ITSI deployment

Module 6 – Designing Services
- Given customer requirements, plan an ITSI implementation
- Identify site entities

Module 7 – Data Audit and Base Searches
- Use a data audit to identify service key performance indicators
- Design base searches

Module 8 – Implementing Services
- Use a service design to implement services in ITSI

Module 9 – Thresholds and Time Policies
- Create KPIs with static and adaptive thresholds
- Use time policies to define flexible thresholds

Module 10 – Entities and Dependencies
- Using entities in KPI searches
- Defining dependencies

Module 11 – Correlation and Multi-KPI Searches
- Define new correlation searches
- Define multi-KPI alerts
- Manage notable event storage

Module 12 – Aggregation Policies
- Create new aggregation policies
- Use smart mode

Module 13 – Anomaly Detection
- Enable anomaly detection
- Work with generated anomaly events

Module 14 – Access Control
- Configure user access control
- Create service level teams
QUESTION 1 After a notable event has been closed, how long will the metadata for that event remain in the KV store by default?
A. 6 months
B. 9 months
C. 1 year
D. 3 months
Answer: A
QUESTION 2 Which of the following is a best practice for identifying the most effective services with which to start an iterative ITSI deployment?
A. Only include KPIs if they will be used in multiple services.
B. Analyze the business to determine the most critical services.
C. Focus on low-level services.
D. Define a large number of key services early.
Answer: A
QUESTION 3 When creating a custom deep dive, what color are services/KPIs in maintenance mode within the topology view?
This is a release announcement for Huawei Certification HCIP-Storage V5.0 (English Version).
The Huawei Certification HCIP-Storage V5.0 (English Version) is scheduled for worldwide official release on November 30th, 2020.
1. Overview
Huawei Certification is an integral part of the company’s “Platform + Ecosystem” strategy, and it supports the ICT infrastructure featuring “Cloud-Pipe-Device”. It evolves to reflect the latest trends of ICT development. Huawei Certification consists of three categories: ICT Infrastructure Certification, Platform and Service Certification, and ICT Vertical Certification, making it the most extensive technical certification program in the industry.
Huawei offers three levels of certification: Huawei Certified ICT Associate (HCIA), Huawei Certified ICT Professional (HCIP), and Huawei Certified ICT Expert (HCIE).
With its leading talent development system and certification standards, Huawei is committed to developing ICT professionals in the digital era, building a healthy ICT talent ecosystem.
The HCIP-Storage V5.0 certification aims to train and certify senior engineers capable of the planning and design, deployment and implementation, management and O&M, and troubleshooting of storage systems.
Passing the HCIP-Storage V5.0 certification indicates that you understand and have mastered storage product technologies; the principles and application scenarios of advanced storage features; and the planning and design, deployment and implementation, management and O&M, and troubleshooting of storage systems. This qualifies you for positions such as enterprise storage system administrator, senior engineer, and senior IT technical support.
2. Released Materials
The following is the list of released materials for HCIP-Storage V5.0:
- HCIP-Storage V5.0 Training Outline
- HCIP-Storage V5.0 Exam Outline
- HCIP-Storage V5.0 Training Materials
- HCIP-Storage V5.0 Learning Guide
- HCIP-Storage V5.0 Lab Guides
- HCIP-Storage V5.0 Timetable
- HCIP-Storage V5.0 Mock Exam
- HCIP-Storage V5.0 Version Instructions
- HCIP-Storage V5.0 Teaching Schedule
- HCIP-Storage V5.0 Equipment List
- HCIP-Storage V5.0 Lab Environment Setup Guide
- HCIP-Storage V5.0 Online Course
3. Training Description
3.1 Training Materials
The training materials of HCIP-Storage V5.0 are as follows:
- Storage System Introduction
- Flash Storage Technology and Application
- Distributed Storage Technology and Application
- Storage Design and Implementation
- Storage Maintenance and Troubleshooting
Knowledge Point | V4.0 | V5.0 | Change Description
Storage System Introduction | 0% | 30% | Added new product knowledge, including Huawei OceanStor Dorado V6, Huawei OceanStor 100D, and Huawei FusionCube.
Flash Storage Technology and Application | 30% | 30% | Optimized the content of the original V4.0 CCSN product and added technical knowledge about Huawei OceanStor all-flash storage products.
Distributed Storage Technology and Application | 30% | 20% | Optimized the content of the original V4.0 CCSS product and added technical knowledge about Huawei OceanStor distributed storage products.
Disaster Recovery and Backup Solution | 30% | 0% | Deleted the content of old products, removed the content that overlaps with the HCIE-Storage course, and integrated the CDPS knowledge points of V4.0 into the HCIE-Storage course.
Storage Design and Implementation | 5% | 10% | Optimized the planning and design contents of the CCSN, CCSS, and CDPS of V4.0 and added the planning and design knowledge of new products.
Storage Maintenance and Troubleshooting | 5% | 10% | Optimized the O&M and troubleshooting contents of the old CCSN, CCSS, and CDPS products of V4.0, and added the O&M and troubleshooting knowledge of new products.
Note: V5.0 focuses on knowledge optimization and content integration of old products. The knowledge points of CCSN, CCSS, and CDPS of V4.0 are integrated into HCIP-Storage V5.0.
3.2 Lab Guides
The lab guides of HCIP-Storage V5.0 are as follows:
- Scenario-based Practice of Flash Storage Technology Application
- Scenario-based Practice of Distributed Storage Technology Application
- Scenario-based Practice of Storage Maintenance and Troubleshooting
Knowledge Point | V4.0 | V5.0 | Change Description
Scenario-based Practice of Flash Storage Technology Application | 30% | 50% | Optimized the content of old CCSN products in V4.0 and added the practice for new product technologies.
Scenario-based Practice of Distributed Storage Technology Application | 30% | 20% | Optimized the content of old CCSS products in V4.0 and added the practice for new product technologies.
Scenario-based Practice of DR and Backup Solution | 30% | 0% | Deleted the content of old products, removed the content that overlaps with the HCIE-Storage course, and integrated the CDPS knowledge points of V4.0 into the HCIE-Storage course.
Scenario-based Practice of Storage Maintenance and Troubleshooting | 10% | 30% | Optimized the O&M and troubleshooting contents of the old CCSN, CCSS, and CDPS products in V4.0, and added the O&M and troubleshooting practice of new products.
Note: V5.0 focuses on knowledge optimization and content integration of old products. The knowledge points of CCSN, CCSS, and CDPS of V4.0 are integrated into HCIP-Storage V5.0.
3.3 Training Duration
The HCIP-Storage V5.0 training is designed for 5 working days.
3.4 Lab Environment
Please refer to the HCIP-Storage V5.0 Equipment List and the HCIP-Storage V5.0 Lab Environment Setup Guide in the released materials list for detailed equipment models and lab setup guidance.
4. Trainer Enablement and Certification
Trainer enablement: HALP trainers can build their knowledge by taking Huawei online courses.
Trainer certification: HALP trainers can apply for certification as a Huawei certified trainer after passing the related certification exams and lab exams and delivering successful trial lectures.
5. Exam Description
The HCIP-Storage V5.0 certification written exam can be taken at Pearson VUE starting November 30, 2020. Reservations can be made at https://home.pearsonvue.com/huawei.
The HCIP-Storage V4.0 certification exam will become unavailable on May 30, 2021. Please plan your study, training, and examination in advance.
6. Release Scope and Target Customers
HCIP-Storage V5.0 is scheduled for official release worldwide on November 30, 2020.
The certification targets ICT professionals, Huawei users, Huawei partners, ISVs, Huawei engineers, and college students.
7. Product Information and Documentation
Huawei Certified HCIP-Storage V5.0 is developed by the Enterprise Business Talent Ecosystem Development Dept. The following describes how to obtain the electronic documents of the training product.
QUESTION 1 A colleague suggests using SmartMigration to improve write performance on certain LUNs. This is:
A. Not possible in any situation.
B. Possible in all situations.
C. Possible in some situations.
D. Only a temporary solution.
Answer: C
QUESTION 2 A live gaming platform uses Huawei OceanStor all-flash storage systems with the flash-dedicated FlashLink technology. Which of the following statements about the multi-core technology of FlashLink are correct?
A. vNodes are bound to CPUs to reduce the overheads for scheduling and transmission across CPUs.
B. Read/write I/Os are deployed in different groups from other types of I/Os to avoid mutual interference.
C. A request is processed by one core until its completion. Cores are lock-free to avoid frequent switchovers among the cores.
D. The intelligent multi-core technology drives linear growth of storage performance with CPUs and cores.
Answer: D
QUESTION 3 A service host is connected to a storage device through a Fibre Channel SAN. However, the service host cannot detect the mapped LUNs after scanning. Which of the following are possible causes?
A. The Fibre Channel module on the service host is faulty.
B. No multipathing software is installed on the service host.
C. Zones are incorrectly configured.
D. The storage network is blocked by the firewall.
Objectives: By the end of the course, you should be able to meet the following objectives:
- Describe the vRealize Automation architecture and use cases in cloud environments
- Manage vRealize Automation entities on VMware and third-party virtual and cloud infrastructures
- Configure and manage Cloud Accounts, Projects, Flavor Mappings, Image Mappings, Network Profiles, Storage Profiles, Volumes, Tags, and Services
- Create, modify, manage, and deploy Cloud Templates
- Connect to a Kubernetes cluster and manage namespaces
- Customize services and virtual machines with cloudConfig
- Configure and manage the Service Broker
- Configure and manage ABX actions, custom properties, event broker subscriptions, and vRealize Orchestrator workflows
- Integrate with vRealize Orchestrator
- Install vRealize Automation with Lifecycle Configuration Manager
- Describe Cloud Automation Services (Cloud Assembly and Code Stream)
- Integrate Cloud Assembly with Terraform and SaltStack
- Use logs and CLI commands to monitor and troubleshoot vRealize Automation
Intended Audience: Experienced system administrators and system integrators responsible for designing and implementing vRealize Automation
Prerequisites: This course requires completion of one of the following courses:
- VMware vSphere: Install, Configure, Manage
- VMware vSphere: Fast Track
Experience working at the command line is helpful.
This course requires that a student be able to perform the following tasks with no assistance or guidance before enrolling:
- Create VMware vCenter Server® objects, such as data centers and folders
- Create a virtual machine using a wizard or a template
- Modify a virtual machine’s hardware
- Migrate a virtual machine with VMware vSphere® vMotion®
- Migrate a virtual machine with VMware vSphere® Storage vMotion®
- Configure and manage a vSphere DRS cluster with resource pools
- Configure and manage a VMware vSphere® High Availability cluster
If you cannot perform all of these tasks, VMware recommends that you complete one of the prerequisite courses before enrolling in VMware vRealize Automation: Install, Configure, Manage.
2 vRealize Automation Overview and Architecture
- Describe the purpose and functionality of vRealize Automation
- Describe the vRealize Automation architecture
- Describe the use of VMware Workspace ONE® Access™
- Describe the relationship between Kubernetes clusters, containers, and vRealize Automation services
- Describe CLI commands for vRealize Automation 8 cluster management
- Describe Cloud Assembly
- Describe Service Broker
- Describe Code Stream

3 Installing vRealize Automation
- List the different vRealize Automation deployment types
- Describe the purpose of vRealize easy installer
- Describe the vRealize Automation installation process

4 Authentication and Authorization
- Identify the steps involved in integrating Workspace ONE with Active Directory
- Describe features of Workspace ONE
- Describe the user roles available in vRealize Automation
- Identify the key tasks performed by each user role
- Define custom roles
- Configure branding and multitenancy

5 Basic Initial Configuration
- Quickly create a basic configuration with a cloud account, cloud zone, project, flavor mapping, and image mapping

6 VMware Cloud Templates
- Configure and deploy a basic cloud template
- Create cloud templates that can run on any cloud
- Use cloudConfig to run commands, install software, and create users
- Use YAML for inputs, variables, and conditional deployments
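Module 6 above mentions cloudConfig, which embeds cloud-init syntax inside a cloud template to customize a deployed machine. A minimal, hypothetical example (the user name and package are placeholders, not from the course material):

```yaml
#cloud-config
# Hypothetical cloudConfig body (cloud-init syntax); values are placeholders.
users:
  - name: appadmin
    sudo: ALL=(ALL) NOPASSWD:ALL
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
```

This illustrates the three things the module calls out: creating users, installing software, and running commands at first boot.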
7 Tags and Storage Configuration
- Configure tags
- Describe different types of tags
- Manage tags
- Configure storage profiles
- Use tags and storage profiles

8 Integrating NSX-T Data Center
- List the capabilities and use cases of NSX-T Data Center
- Describe the NSX-T Data Center architecture and components
- Integrate NSX-T Data Center with vRealize Automation
- List the supported network profiles in vRealize Automation
- Use NSX-T Data Center components to design a multitier application Cloud Template
- Identify the network and security options available in the design canvas
- Create and manage on-demand networks and security groups
- Configure NSX-T day 2 actions

9 Integrating with Public Clouds
- Configure and use VMware Cloud Foundation accounts
- Configure and use an AWS cloud account
- Configure and use an Azure cloud account
- Configure and use a Google Cloud Platform cloud account

10 Using Service Broker for Catalog Management
- Release a VMware Cloud Template™
- Define content source and content sharing
- Define Service Broker policy enforcement
- Use custom forms for catalog items

11 vRealize Automation Extensibility
- Describe extensibility
- Use event topics
- Create a subscription
- Call a vRealize Orchestrator workflow
- Create ABX actions

12 Using Code Stream
- Introduction to Code Stream
- The CI/CD process
- Integrate GitLab with Code Stream and Cloud Assembly
- Use Code Stream to install software

13 Using Terraform
- Integrate Cloud Assembly with Terraform
- Use Terraform with a VMware Cloud Template
- Use Terraform with Code Stream

14 Using Kubernetes Clusters
- Introduction to Kubernetes
- Connect to an existing Kubernetes cluster
- Integrate VMware Tanzu™ Kubernetes Grid Integrated Edition
- Create a Supervisor Namespace as a catalog item

15 Using SaltStack for Configuration Management
- Introduction to SaltStack with vRealize Automation
- Use SaltStack for software deployment
- Use SaltStack for configuration management
- Use SaltStack with event-driven orchestration

16 vRealize Automation Troubleshooting and Integration
- Location of logs
- Using Activity
- Monitoring deployment history
- Basic troubleshooting
- CLI commands
- Collecting logs (VAMI console)
- Integration with VMware vRealize® Log Insight™
- Integration with vRealize Operations
- Migrating vRealize Automation 7.x to 8
This exam is a qualifying exam for the Associate – Data Protection and Management (DCA-DPM) track. The exam focuses on data protection and management in a modern data center environment. It includes fault-tolerant IT infrastructure, data backup, data deduplication, data replication, data archiving, and data migration. It also covers cloud-based data protection techniques, SDDC-specific data protection, solutions for protecting big data, edge, and mobile device data, data security, and data protection management. A limited number of questions refer to product examples that are used in the training to reinforce the knowledge of technologies and concepts.
Dell Technologies provides free practice tests to assess your knowledge in preparation for the exam. Practice tests allow you to become familiar with the topics and question types you will find on the proctored exam. Your results on a practice test offer one indication of how prepared you are for the proctored exam and can highlight topics on which you need to study and train further. A passing score on the practice test does not guarantee a passing score on the certification exam.
Exam Topics Topics likely to be covered on this exam include:
Data Protection Architecture (17%)
- Describe the need for data protection and data availability, the data center and its core elements
- Explain the various data protection and availability solutions, and key data protection and management activities
- Describe the building blocks of a data protection architecture and related data source components
- Understand data protection applications and storage
- Explain data security and related management functions

Data Protection Solutions (35%)
- Describe fault tolerance; compute, network, and storage-based fault tolerance techniques; application-based fault tolerance techniques; and availability zones
- Describe backup architecture and backup recovery operations, granularity, topologies, and methods
- Describe components of the data deduplication solution: deduplication ratio and factors, granularity, source- and target-based deduplication, and deduplication at primary storage
- Describe the nature and use of replicas and how they are implemented in local and remote replication solutions

Data Archiving and Migration (10%)
- Describe data archiving architecture, archiving and retrieval operations, and storage tiering
- Describe SAN, NAS, hypervisor, and application-based migration
Data Protection for SDDC, Cloud, and Big Data (21%)
• Describe the SDDC architecture and its benefits; software-defined compute, storage, and networking; and the data protection process
• Describe the key aspects of cloud computing, cloud service models, and cloud deployment models
• Explain cloud-based backup, replication, archiving, and migration
• Describe big data analytics, data protection solutions for the data lake, big data as a service, mobile device backup, and cloud-based mobile device data protection
Securing and Managing the Data Protection Environment (17%)
• Describe drivers for data security, GRC, and security threats
• Understand various security controls and cyber recovery in a data protection environment
• Explain data protection management functions and the management processes that support data protection operations

The percentages after each topic above reflect the approximate distribution of the total question set across the exam.
Recommended Training The following curriculum is recommended for candidates preparing to take this exam.
QUESTION 1 Which Dell EMC Storage product family does SRDF support?
A. Unity B. PowerMax C. PowerScale D. PowerStore
Answer: B
QUESTION 2 Which type of virtual machine clone is created from a snapshot of a parent VM?
A. Mirrored Clone B. Full Clone C. Linked Clone D. Snap Clone
Answer: C
QUESTION 3 A backup of 20 GB of data is reduced by a deduplication algorithm to 4 GB of data. What is the deduplication ratio?
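The deduplication ratio asked for above is simply the size of the backup data before deduplication divided by the size actually stored afterward, so 20 GB reduced to 4 GB gives a 5:1 ratio. A minimal sketch of the arithmetic:

```python
def dedup_ratio(original_gb: float, stored_gb: float) -> float:
    """Deduplication ratio = data size before dedup / data size after dedup."""
    return original_gb / stored_gb

# 20 GB backed up, 4 GB actually written to the backup target
ratio = dedup_ratio(20, 4)
print(f"{ratio:.0f}:1")  # 5:1
```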
About this certification exam Length: 100 minutes Registration fee: $250 Language: English Exam format: Multiple choice and multiple select taken remotely or in person at a test center. Locate a test center near you.
Exam Delivery Method: a. Take the online-proctored exam from a remote location, review the online testing requirements. b. Take the onsite-proctored exam at a testing center: locate a test center near you.
Prerequisites: None Recommended experience: Business analysts with 5+ months of experience using Looker for report development, data visualization, and dashboard best practices.
Looker LookML Developer A Looker LookML Developer works with datasets and LookML and is familiar with SQL and BI tools. LookML Developers are proficient in model management, including troubleshooting existing model errors, implementing data security requirements, creating LookML objects, and maintaining LookML project health. LookML Developers design new LookML dimensions and measures and build Explores for users to answer business questions. LookML developers are skilled at quality management, from implementing version control to assessing code quality to utilizing SQL runner for data validation.
The Looker LookML Developer exam assesses your ability to:
Maintain and debug LookML code
Build user-friendly Explores
Design robust models
Define caching policies
Understand various datasets and associated schemas
Use Looker tools such as the Looker IDE, SQL Runner, and LookML Validator
Exam overview
Step 1: Understand what’s on the exam
The exam guide contains a complete list of topics that may be included on the exam. Review the exam guide to determine if your knowledge aligns with the topics on the exam.
Certification exam guide
Section 1: Model management
1.1 Troubleshoot errors in existing data models. For example:
Determine error sources
Apply procedural concepts to resolve errors

1.2 Apply procedural concepts to implement data security requirements. For example:
Implement permissions for users
Decide which Looker features to use to implement data security (e.g., access filters, field-level access controls, row-level access controls)
1.3 Analyze data models and business requirements to create LookML objects. For example:
Determine which views and tables to use
Determine how to join views into Explores
Build project-based needs (e.g., data sources, replication, mock reports provided by clients)
1.4 Maintain the health of LookML projects in a given scenario. For example:
Ensure existing content is working (e.g., use Content Validator, audit, search for errors)
Resolve errors
Section 2: Customization
2.1 Design new LookML dimensions or measures with given requirements. For example:
Translate business requirements (specific metrics) into the appropriate LookML structures (e.g., dimensions, measures, and derived tables)
Modify existing project structure to account for new reporting needs
Construct SQL statements to use with new dimensions and measures
2.2 Build Explores for users to answer business questions. For example:
Analyze business requirements and determine the LookML code implementation to meet them (e.g., models, views, join structures)
Determine which additional features to use to refine data (e.g., sql_always_where, always_filter, showing only certain fields using hidden: or fields:)
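The refinement parameters named above can be illustrated with a hedged LookML sketch; the `orders`/`users` model and field names here are hypothetical, not taken from the exam guide:

```lookml
explore: orders {
  join: users {
    sql_on: ${orders.user_id} = ${users.id} ;;
    relationship: many_to_one
  }
  # row-level filter silently applied to every query against this Explore
  sql_always_where: ${orders.status} != 'cancelled' ;;
  # default filter that the user sees and can change
  always_filter: {
    filters: [orders.created_date: "30 days"]
  }
  # expose only a curated subset of fields to business users
  fields: [orders.id, orders.created_date, orders.status, users.name]
}
```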
Section 3: Optimization

3.1 Apply procedural concepts to optimize queries and reports for performance. For example:
Determine which solution to use based on performance implications (e.g., Explores, merged results, derived tables)
Apply procedural concepts to evaluate the performance of queries and reports
Determine which methodology to use based on the performance of the queries and reports (e.g., A/B testing, SQL principles)
3.2 Apply procedural concepts to implement persistent derived tables and caching policies based on requirements. For example:
Determine appropriate caching settings based on the data warehouse’s update frequency (e.g., hourly, weekly, based on ETL completion)
Determine when to use persistent derived tables based on the runtime and complexity of Explore queries, and on users’ needs
Determine appropriate solutions for improving data availability (e.g., caching query data, persisting tables, combination solutions)
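A hedged sketch of how a caching policy and a persistent derived table fit together in LookML; the datagroup, table, and column names are illustrative only:

```lookml
# Invalidate the cache when the nightly ETL finishes, and never serve
# cached results older than 24 hours.
datagroup: etl_complete {
  sql_trigger: SELECT MAX(finished_at) FROM etl_log ;;
  max_cache_age: "24 hours"
}

view: customer_order_facts {
  derived_table: {
    sql: SELECT user_id, COUNT(*) AS lifetime_orders
         FROM orders
         GROUP BY 1 ;;
    datagroup_trigger: etl_complete  # persist the table on the same schedule
  }
  dimension: lifetime_orders {
    type: number
    sql: ${TABLE}.lifetime_orders ;;
  }
}
```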
Section 4: Quality

4.1 Implement version control based on given requirements. For example:
Determine the appropriate setup for Git branches (e.g., shared branches, pull from remote production)
Reconcile merge conflicts with other developer branches (e.g., manage multiple users)
Validate the pull request process
4.2 Assess code quality. For example:
Resolve validation errors and warnings
Utilize features to increase usability (e.g., descriptions, labels, group labels)
Use appropriate code organization for project files (e.g., one view per file)
4.3 Utilize SQL Runner for data validation in a given scenario. For example:
Determine why specific queries return the results they do by looking at the generated SQL in SQL Runner
Resolve inconsistencies found in the system or analysis (e.g., different results than expected, non-unique primary keys)
Optimize SQL for cost or efficiency based on business requirements
QUESTION 1 Business users report that they are unable to build useful queries because the list of fields in the Explore is too long to find what they need. Which three LookML options should a developer use to curate the business user's experience? (Choose three.)
A. Add a description parameter to each field with context so that users can search key terms. B. Create a separate project for each business unit containing only the fields that the unit needs. C. Add a group_label parameter to relevant fields to organize them into logical categories. D. Use the hidden parameter to remove irrelevant fields from the Explore. E. Use a derived table to show only the relevant fields.
Answer: A,C,E
QUESTION 2 A user reports that a query run against the orders Explore takes a long time to run. The query includes only fields from the users view. Data for both views is updated in real time. The developer runs the following query in SQL Runner and quickly receives results: SELECT * FROM users. What should the developer do to improve the performance of the query in the Explore?
A. Create an Explore with users as the base table. B. Create a persistent derived table from the users query. C. Create an ephemeral derived table from the users query. D. Add persist_for: "24 hours" to the orders Explore.
Answer: A
QUESTION 3 A developer has User Specific Time Zones enabled for a Looker instance, but wants to ensure that queries run in Looker are as performant as they can be. The developer wants to add a datatype: date parameter to all dimension_group definitions without time data in a table-based view, so that time conversions don't occur for these fields. How can the developer determine to which fields this parameter should be applied through SQL Runner?
A. Open the Explore query in SQL Runner and validate whether removing the conversion from date fields changes the results. B. Open the Explore query in SQL Runner to determine which fields are converted. C. Use the CAST function in SQL Runner to ensure that all underlying fields are dates and conversions are not applied. D. Use the Describe feature in SQL Runner to determine which fields include time data.
Cloud Digital Leader A Cloud Digital Leader can distinguish and evaluate the various capabilities of Google Cloud core products and services and how they can be used to achieve desired business goals. A Cloud Digital Leader is well-versed in basic cloud concepts and can demonstrate a broad application of cloud computing knowledge in a variety of applications.
The Cloud Digital Leader exam is job-role independent. The exam assesses the knowledge and skills of individuals who want or are required to understand the purpose and application of Google Cloud products.
The Cloud Digital Leader exam assesses your knowledge in three areas:
General cloud knowledge (approximately 15-25% of the exam)
General Google Cloud knowledge (approximately 25-35% of the exam)
Google Cloud products and services (approximately 45-55% of the exam)
About this certification exam Length: 90 minutes Registration fee: $99 Language: English, Japanese Exam format: Multiple choice and multiple select
Exam Delivery Method: a. Take the online-proctored exam from a remote location, review the online testing requirements. b. Take the onsite-proctored exam at a testing center, locate a test center near you.
Prerequisites: None Recommended experience: Experience collaborating with technical professionals
Exam overview
Step 1. Understand what’s on the exam The exam contains multiple choice and multiple-select questions, including real-world technical scenarios to assess your ability to identify the appropriate cloud solutions and Google Cloud products.
The exam guide contains a list of topics that may be assessed on the exam. Review the exam guide to determine if your knowledge aligns with the topics on the exam.
Step 2. Expand your knowledge with training
Cloud Digital Leader
A Cloud Digital Leader can articulate the capabilities of Google Cloud core products and services and how they benefit organizations. The Cloud Digital Leader can also describe common business use cases and how cloud solutions support an enterprise. The Cloud Digital Leader exam is job-role agnostic and does not require hands-on experience with Google Cloud.

Section 1. General cloud knowledge
1.1 Define basic cloud technologies. Considerations include:
Differentiate between traditional infrastructure, public cloud, and private cloud
Define cloud infrastructure ownership
Shared Responsibility Model
Essential characteristics of cloud computing
1.2 Differentiate cloud service models. Considerations include:
Infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS)
Describe the trade-offs between level of management and flexibility when comparing cloud services
Define the trade-offs between costs and responsibility
Appropriate implementation and alignment with a given budget and resources
1.3 Identify common cloud procurement financial concepts. Considerations include:
Operating expenses (OpEx), capital expenditures (CapEx), and total cost of operations (TCO)
Recognize the relationship between OpEx and CapEx related to networking and compute infrastructure
Summarize the key cost differentiators between cloud and on-premises environments
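The OpEx/CapEx distinction above can be made concrete with a rough TCO comparison: on-premises infrastructure is a large up-front capital purchase plus recurring operating costs, while cloud spend is purely consumption-based. A hedged sketch with entirely hypothetical figures:

```python
def on_prem_tco(capex: int, annual_opex: int, years: int) -> int:
    """Up-front CapEx (hardware purchase) plus recurring OpEx (power, support, staff)."""
    return capex + annual_opex * years

def cloud_tco(monthly_spend: int, years: int) -> int:
    """Pure OpEx: pay-as-you-go consumption with no up-front purchase."""
    return monthly_spend * 12 * years

# Hypothetical 3-year comparison
print(on_prem_tco(capex=100_000, annual_opex=20_000, years=3))  # 160000
print(cloud_tco(monthly_spend=4_000, years=3))                  # 144000
```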
Section 2. General Google Cloud knowledge

2.1 Recognize how Google Cloud meets common compliance requirements. Considerations include:
Locating current Google Cloud compliance requirements
Familiarity with Compliance Reports Manager
2.2 Recognize the main elements of Google Cloud resource hierarchy. Considerations include: Describe the relationship between organization, folders, projects, and resources
2.3 Describe controlling and optimizing Google Cloud costs. Considerations include:
Google Cloud billing models and applicability to different service classes
Define a consumption-based use model
Application of discounts (e.g., flat-rate, committed-use discounts [CUD], sustained-use discounts [SUD])
2.4 Describe Google Cloud’s geographical segmentation strategy. Considerations include:
Regions
Zones
Regional resources
Zonal resources
Multiregional resources
2.5 Define Google Cloud support options. Considerations include:
Distinguish between billing support, technical support, role-based support, and enterprise support
Recognize a variety of Service Level Agreement (SLA) applications
Section 3. Google Cloud products and services
3.1 Describe the benefits of Google Cloud virtual machine (VM)-based compute options. Considerations include:
Compute Engine, Google Cloud VMware Engine, and Bare Metal
Custom versus standard sizing
Free, premium, and custom service options
Attached storage/disk options
Preemptible VMs
3.2 Identify and evaluate container-based compute options. Considerations include:
Define the function of a container registry
Distinguish between VMs, containers, and Google Kubernetes Engine
3.3 Identify and evaluate serverless compute options. Considerations include:
Define the function and use of App Engine, Cloud Functions, and Cloud Run
Define the rationale for versioning with serverless compute options
Cost and performance trade-offs of scale to zero
3.4 Identify and evaluate multiple data management offerings. Considerations include:
Describe the differences and benefits of Google Cloud’s relational and non-relational database offerings (e.g., Cloud SQL, Cloud Spanner, Cloud Bigtable, BigQuery)
Describe Google Cloud’s database offerings and how they compare to commercial offerings
3.5 Distinguish between ML/AI offerings. Considerations include:
Describe the differences and benefits of Google Cloud’s hardware accelerators (e.g., Vision API, AI Platform, TPUs)
Identify when to train your own model, use a Google Cloud pre-trained model, or build on an existing model
3.6 Differentiate between data movement and data pipelines. Considerations include:
Describe Google Cloud’s data pipeline offerings (e.g., Pub/Sub, Dataflow, Cloud Data Fusion, BigQuery, Looker)
Define data ingestion options
3.7 Apply use cases to a high-level Google Cloud architecture. Considerations include:
Define Google Cloud’s offerings around the Software Development Life Cycle (SDLC)
Describe Google Cloud’s platform visibility and alerting offerings
3.8 Describe solutions for migrating workloads to Google Cloud. Considerations include:
Identify data migration options
Differentiate when to use Migrate for Compute Engine versus Migrate for Anthos
Distinguish between lift and shift versus application modernization
3.9 Describe networking to on-premises locations. Considerations include:
Define Software-Defined WAN (SD-WAN)
Determine the best connectivity option based on networking and security requirements
Private Google Access
3.10 Define identity and access features. Considerations include: Cloud Identity, Google Cloud Directory Sync, and Identity and Access Management (IAM)
QUESTION 1 You are migrating workloads to the cloud. The goal of the migration is to serve customers worldwide as quickly as possible. According to local regulations, certain data is required to be stored in a specific geographic area, but it can be served worldwide. You need to design the architecture and deployment for your workloads. What should you do?
A. Select a public cloud provider that is only active in the required geographic area B. Select a private cloud provider that globally replicates data storage for fast data access C. Select a public cloud provider that guarantees data location in the required geographic area D. Select a private cloud provider that is only active in the required geographic area
Answer: C
QUESTION 2 Your organization needs a large amount of extra computing power within the next two weeks. After those two weeks, the need for the additional resources will end. Which is the most cost-effective approach?
A. Use a committed use discount to reserve a very powerful virtual machine B. Purchase one very powerful physical computer C. Start a very powerful virtual machine without using a committed use discount D. Purchase multiple physical computers and scale workload across them
Answer: C
QUESTION 3 Your organization needs to plan its cloud infrastructure expenditures. Which should your organization do?
A. Review cloud resource costs frequently, because costs change often based on use B. Review cloud resource costs annually as part of planning your organization’s overall budget C. If your organization uses only cloud resources, infrastructure costs are no longer part of your overall budget D. Involve fewer people in cloud resource planning than your organization did for on-premises resource planning
Exam ID : HPE0-J68 Exam type : Proctored Exam duration : 1 hour 30 minutes Exam length : 60 questions Passing score : 63% Delivery languages : Japanese, English, Korean Supporting resources : HPE Storage Solutions, Rev. 20.41 Additional study materials : HPE ASE Storage Solutions V4 Study Guide
Ideal candidate
Candidates include but are not limited to channel partners, customers, and HPE employees. A candidate will typically have two to three years of experience in a job role focused on interpreting customer requirements to design, install, configure, and manage HPE storage and backup solutions. The candidate will be able to demonstrate the critical-thinking skills required to design, optimize, and deploy storage solutions and to manage storage solution issues.
Exam contents This exam has 60 questions.
Advice to help you take this exam
Complete the training and review all course materials and documents before you take the exam. Use HPE Press study guides and additional reference materials such as practice tests and HPE books. Exam items are based on expected knowledge acquired from job experience, an expected level of industry-standard knowledge, or other prerequisites (events, supplemental materials, etc.). Successful completion of the course or study materials alone does not ensure you will pass the exam.
This exam validates that you can:
20% Foundational storage architectures and technologies
Compare different storage technologies (file, block, object storage)
Describe and evaluate use cases for drive technologies and RAID levels
Describe SAN transport technologies and components
Describe and characterize RAID levels
Describe and characterize the different SAN topologies, transport technologies, and components
Describe storage presentation to hosts (LUN masking, SSP, software-based zoning, etc.)
Describe storage virtualization technologies
Describe backup, archiving, and data availability technologies
Describe single-site storage network management in general terms
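The capacity trade-offs behind the RAID-level objectives above can be sketched with a simple calculation: RAID 0 stripes with no redundancy, RAID 1 mirrors (halving raw capacity), RAID 5 sacrifices one drive's worth of capacity to parity, and RAID 6 sacrifices two. A simplified illustration (ignores hot spares and metadata overhead; the function name is ours, not HPE's):

```python
def usable_capacity(level: int, drives: int, drive_tb: float) -> float:
    """Usable capacity in TB for common RAID levels (simplified model)."""
    if level == 0:                      # striping, no redundancy
        return drives * drive_tb
    if level == 1:                      # mirroring: half the raw capacity
        return drives * drive_tb / 2
    if level == 5:                      # single parity: one drive's worth lost
        return (drives - 1) * drive_tb
    if level == 6:                      # double parity: two drives' worth lost
        return (drives - 2) * drive_tb
    raise ValueError(f"unsupported RAID level: {level}")

# e.g., six 4 TB drives under each level
for lvl in (0, 1, 5, 6):
    print(f"RAID {lvl}: {usable_capacity(lvl, 6, 4.0):g} TB usable")
```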
20% Functions, features, and capabilities of HPE Storage products, solutions, and warranty service offerings
Position HPE Storage solutions for customer solutions Identify and describe appropriate HPE resources to help a solution meet customer requirements
20% Planning and designing HPE Storage solutions
Discover opportunities for single-site environments
Size a single-site storage solution for a customer solution
Given a set of customer requirements, plan and design a single-site storage solution
Review and validate a single-site storage solution proposal
Present a single-site solution to a customer
20% Installing, configuring, setting up, and validating HPE Storage solutions
Inspect for proper power, rack space, and cooling
Plan a solution installation based on a proposed design
Install the designed solution following best practices
Prepare the solution for customer use
Configure storage according to the solution design
Perform tests to verify the solution works as designed
Hand over the installed HPE solution to the customer
10% Performance-tuning and optimizing HPE Storage solutions
Establish a performance baseline from customer application requirements
Test performance and collect metrics on configurations and solutions based on customer SLA requirements
Analyze an existing solution and identify potential issues and improvements
Optimize and performance-tune a solution
10% Managing, monitoring, operating, and troubleshooting HPE Storage solutions
Use management tools to monitor the customer environment
Deploy and configure additional software tools in the customer environment
Determine whether software/firmware versions are current and supported in the customer environment
Perform updates and lifecycle management operations on systems and solutions
Perform health checks on solution deployments in customer environments
Troubleshoot a solution
Develop and manage policy for compliance in a multi-site and/or complex HPE Storage solution
QUESTION 1 You have proposed that a customer replace their legacy SAN switches with new ?-Series switches. They are concerned about the management of all the devices, so you have included SANnav in the proposal. What are the benefits of SANnav that you should emphasize to this customer? (Choose two.)
A. It runs in a dedicated JVM B. It offers automated SPOCK validation C. It runs in a browser D. It provides a global View E. It provides fine-grain visibility into Storage Fabrics (FC + Ethernet)
Answer: C,D
QUESTION 2 A customer has completely virtualized their datacenter with VMware. You propose a SimpliVity solution to replace their aging hardware. Which SimpliVity features should you emphasize in your presentation? (Choose two.)
A. It uses native hypervisor management tools for management B. It utilizes the Storage Management Utility for management and monitoring C. It utilizes a unified ASIC for performance D. It includes deduplication and backup capabilities E. It provides S3 file access
Answer: B,D
QUESTION 3 You want to deploy Recovery Manager Central to a customer environment. How can you accomplish this? (Choose two.)
A. Install RMC on top of the virtual or physical RHEL system using the GUI wizard B. Deploy an RMC virtual appliance to Microsoft Hyper-V hypervisor C. Deploy RMC on top of the virtual CentOS system using the CLI D. Deploy RMC on top of the virtual CentOS system using the GUI wizard E. Install RMC on the top of the virtual or physical RHEL system using the CLI F. Deploy an RMC virtual appliance to VMware ESXi hypervisor
EXAM NUMBER : 1V0-81.20 PRODUCT : NSX-T Data Center 3.0, Workspace ONE 20.X, VMware Carbon Black Cloud EXAM LANGUAGE : English Associated Certification : VCTA-SEC 2021
EXAM OVERVIEW The Associate VMware Security exam tests a candidate’s awareness of VMware’s security solution and the candidate’s ability to provide entry level support for the security features of NSX-T Data Center, Workspace ONE, and Carbon Black Cloud.
SCHEDULE EXAM $125
EXAM INFO : Duration : 120 minutes Number of Questions : 55 Passing Score : 300 Format: Multiple Choice, Multiple Choice Multiple Selection, Drag and Drop, Matching, Proctored
Passing Score
*Your exam may contain unscored questions in addition to the scored questions; this is a standard testing practice. You will not know which questions are unscored, and your exam results will reflect your performance on the scored questions only.
*VMware exams are scaled on a range from 100-500, with the determined raw cut score scaled to a value of 300.
*Scaled scoring allows raw scores from different VMware exams to be scaled to a consistent value.
*Raw passing scores differ between VMware exams based on different technologies or different levels of competency.
*A scaled score provides a standard range for test takers and permits direct and fair comparisons of results from one exam form to another.
If a section does not have testable objectives in this version of the exam, it will be noted below, accordingly. The objective numbering may be referenced in your score report at the end of your testing event for further preparation should a retake of the exam be necessary.
Sections Included in this Exam

Section 1 – VMware vSphere Architectures and Technologies
Objective 1.1 – Describe the anatomy and attack surfaces of a cyber attack
Objective 1.1.1 – Describe network-based attacks
Objective 1.1.2 – Describe social engineering-based attacks
Objective 1.1.3 – Describe hardware-based attacks
Objective 1.1.4 – Describe software-based attacks
Objective 1.2 – Identify common vulnerabilities of enterprise systems
Objective 1.3 – Explain common cyber-attack mitigation strategies
Objective 1.4 – Explain NSX high-level architecture

Section 2 – VMware Products and Solutions
Objective 2.1 – Describe the VMware security vision
Objective 2.2 – Explain Zero-Trust user and device access
Objective 2.3 – Explain Zero-Trust for network security
Objective 2.4 – Describe the Service-Defined Firewall
Objective 2.5 – Identify physical and virtual requirements for a defense-in-depth security deployment
Objective 2.5.1 – Identify network security requirements
Objective 2.5.2 – Identify application security requirements
Objective 2.5.3 – Identify endpoint security requirements (Audit & Remediation, EDR, NGAV)
Objective 2.6 – Describe the functionalities of VMware’s security vision
Objective 2.6.1 – Explain the functionality of the NSX gateway firewall and distributed firewall
Objective 2.6.2 – Explain device posture based on network
Objective 2.6.3 – Explain the secure tunnel capabilities of Workspace ONE
Objective 2.6.4 – Explain the endpoint security capabilities of Carbon Black
Objective 2.6.5 – Explain CloudHealth and its role in a multi-cloud security solution
Objective 2.7 – Describe the functions of edge firewall, internal firewall, and endpoint protection security mechanisms
Objective 2.8 – Differentiate between Layer 3 firewalls and Layer 7 firewalls
Objective 2.9 – Explain how Workspace ONE UEM facilitates endpoint security
Objective 2.10 – Describe how conditional access and modern authentication enforce security
Objective 2.11 – Explain how Workspace ONE Intelligence can be used to enforce endpoint security
Objective 2.12 – List the features of Workspace ONE Intelligence: Risk Score, CVE patch remediation, user experience, Intelligence SDK
Objective 2.13 – Differentiate between NGAV and traditional AV
Objective 2.14 – Describe the benefits and use case for next-generation anti-virus
Objective 2.15 – Describe the benefits and use case for VMware Carbon Black Cloud Enterprise EDR
Objective 2.16 – Explain the use case for Audit and Remediation
Objective 2.17 – Describe the different response capabilities in the VMware Carbon Black Cloud
Objective 2.18 – Differentiate between the types of reputations seen in the VMware Carbon Black Cloud

Section 3 – There are no testable objectives for this section.
Section 4 – There are no testable objectives for this section.
Section 5 – There are no testable objectives for this section.

Section 6 – Administrative and Operational Tasks
Objective 7.1 – Run compliance evaluation in Workspace ONE UEM against enrolled devices
Objective 7.2 – Examine the security violations for enrolled devices
Objective 7.3 – Examine compliance policies in the Workspace ONE UEM console
Objective 7.4 – Identify the security factors configured in the access policies from the Workspace ONE Access console
Objective 7.5 – Navigate the Workspace ONE Intelligence portal
Objective 7.6 – Interpret the dashboards and widgets in Workspace ONE Intelligence
Objective 7.7 – Use endpoint data collected in Workspace ONE Intelligence to investigate security issues
Objective 7.8 – Navigate the VMware Carbon Black Cloud
Objective 7.9 – Create watchlists in the VMware Carbon Black Cloud to detect threats
Objective 7.10 – Identify appropriate searches in the VMware Carbon Black Cloud
Objective 7.11 – Investigate an alert and describe appropriate response actions in the VMware Carbon Black Cloud
Objective 7.12 – Interpret and explain the impact of Rules in the VMware Carbon Black Cloud
Objective 7.13 – Perform recommended queries in the VMware Carbon Black Cloud
Objective 7.14 – Describe the different mechanisms to allow-list applications to meet business requirements
Objective 7.15 – Perform device activation/enrollment for VMware devices and applications
Objective 7.16 – Identify preconfigured firewall rules for the NSX Security Tab
Objective 7.17 – Determine firewall rules action
Objective 7.18 – Verify the validity of firewall rules

Recommended Courses
VMware Virtual Cloud Network: Core Technical Skills (Coming Soon!)
VMware Digital Workspace: Self-Paced (Coming Soon!)
VMware Carbon Black Cloud Audit and Remediation
VMware Carbon Black Cloud Endpoint Standard
VMware Carbon Black Cloud Enterprise EDR

References*
In addition to the recommended courses, item writers used the following references for information when writing exam questions. It is recommended that you study the reference content as you prepare to take the exam, in addition to any recommended training.
QUESTION 1 Which VMware product allows you to query an endpoint like a database?
A. VMware NSX-T Data Center B. VMware Carbon Black Audit & Remediation C. VMware Workspace ONE UEM D. VMware Carbon Black Endpoint Standard
Answer: B
QUESTION 2 Which three are industry best practices of Zero Trust framework? (Choose three.)
A. Employee machines need to have a passcode profile setup B. Employee machines on Internal network are trusted and have access to all internal resources C. Employee machines are checked for compliance before they get access to applications D. Employees are not required to provide MFA to access internal resources over VPN E. Employees get access to only the required resources to get their job done
Answer: A,C,E
QUESTION 3 Which four alert filters are available in the VMware Carbon Black Cloud Investigate page? (Choose four.)
A. Watchlist B. Target Value C. Policy D. Security Alert List E. Effective Reputation F. Alert Severity