NSE5_FMG-6.4 Fortinet NSE 5 – FortiManager 6.4 Exam

Exam series: NSE5_FMG-6.4
Number of questions: 35
Exam time: 70 minutes
Language: English and Japanese
Product version: FortiManager 6.4
Status: Available

NSE 5 Certification
The Network Security Analyst designation recognizes your ability to implement network security management and analytics using Fortinet security devices. We recommend this course for network and security analysts who require the expertise to centrally manage, analyze, and report on Fortinet security devices. Visit the Fortinet NSE Certification Program page for information about certification requirements.

Fortinet NSE 5—FortiManager 6.4
The Fortinet NSE 5—FortiManager 6.4 exam is part of the NSE 5 Network Security Analyst program, and recognizes the successful candidate’s knowledge of and expertise with FortiManager.

The exam tests applied knowledge of FortiManager configuration, operation, and day-to-day administration, and includes operational scenarios, system configuration, device registration, and troubleshooting.

Audience
The Fortinet NSE 5—FortiManager 6.4 exam is intended for network and security analysts who are responsible for centralized network administration of many FortiGate devices using FortiManager.

Exam Details
Exam name Fortinet NSE 5—FortiManager 6.4
Exam series NSE5_FMG-6.4
Time allowed 70 minutes
Exam questions 35 multiple-choice questions
Scoring Pass or fail; a score report is available from your Pearson VUE account
Language English and Japanese
Product version FortiManager 6.4.1

FortiManager 6.4 Exam Topics

Successful candidates have applied knowledge and skills in the following areas and tasks:
* Administration
  * Perform initial configuration
  * Configure administrative access
  * Configure administrative domains (ADOMs)
  * Configure workspace mode
* Device Manager
  * Register devices in ADOMs
  * Troubleshoot device communication issues
  * Manage registered devices
  * Install device-level configuration changes
  * Diagnose issues using revision history
* Policy and objects
  * Create and install policy packages
  * Identify how the ADOM version affects policy and object configurations
  * Configure event handlers
  * Customize and generate reports
  * Troubleshoot reports
* Advanced and additional configuration
  * Configure SD-WAN using central management
  * Configure security fabric using central management
  * Configure FortiManager HA
  * Configure FortiManager as local FDS
* Diagnostics and troubleshooting
  * Troubleshoot import and installation issues between FortiManager and FortiGate
  * Troubleshoot device and ADOM databases
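The Device Manager tasks above (registering devices, managing them, and installing changes) are commonly scripted against FortiManager's JSON-RPC API. The Python sketch below only builds the request payloads and sends nothing over the network; the hostname and credentials are placeholders, while the `/sys/login/user` and `/dvmdb/...` URLs follow the publicly documented API.

```python
import json

# Placeholder endpoint; a real deployment posts these payloads to
# https://<fortimanager>/jsonrpc over HTTPS.
FMG_URL = "https://fortimanager.example.com/jsonrpc"

def login_request(user, password, request_id=1):
    """Build the JSON-RPC payload that opens an API session."""
    return {
        "id": request_id,
        "method": "exec",
        "params": [{
            "url": "/sys/login/user",
            "data": {"user": user, "passwd": password},
        }],
    }

def list_devices_request(adom, session, request_id=2):
    """Build the payload that lists devices registered in an ADOM."""
    return {
        "id": request_id,
        "method": "get",
        "params": [{"url": "/dvmdb/adom/{}/device".format(adom)}],
        "session": session,  # token returned by the login call
    }

payload = json.dumps(list_devices_request("root", "<session-token>"))
```

The session token returned in the login response is echoed back in every subsequent call, which is why `list_devices_request` takes it as a parameter.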

Training Resources

The following resources are recommended for attaining the knowledge and skills that are covered on the exam. The recommended training is available as a foundation for exam preparation. In addition to training, candidates are strongly encouraged to have hands-on experience with the exam topics and objectives.
NSE Training Institute Courses
* NSE 5 FortiManager 6.4

Other Resources

* FortiManager Administration Guide 6.4.1
* FortiManager New Features Guide 6.4.0
* FortiManager Communications Protocol Guide 6.4.0
* FortiManager Best Practices 6.4.0
* FortiManager CLI Reference 6.4.1

Experience
* Minimum of six months to one year of hands-on experience with FortiGate and FortiManager

Exam Sample Questions
A set of sample questions is available from the NSE Training Institute. These questions sample the exam content in question type and content scope. However, the questions do not necessarily represent all the exam content, nor are they intended
to assess an individual’s readiness to take the certification exam.

See the NSE Training Institute for the course that includes the sample questions.

Examination Policies and Procedures
The NSE Training Institute recommends that candidates review exam policies and procedures before registering for the exam. Access important information on the Program Policies page, and find answers to common questions on the FAQ page.

QUESTION 1
Which two statements about the scheduled backup of FortiManager are true? (Choose two.)

A. It does not back up firmware images saved on FortiManager.
B. It can be configured using the CLI and GUI.
C. It backs up all devices and the FortiGuard database.
D. It supports FTP, SCP, and SFTP.

Answer: A,D

QUESTION 2
In addition to the default ADOMs, an administrator has created a new ADOM named Training for FortiGate
devices. The administrator sent a device registration request to FortiManager from a remote FortiGate.
Given the administrator’s actions, which statement is true?

A. The FortiManager administrator must add the unauthorized device to the Training ADOM using the Add Device wizard only.
B. FortiGate will be added automatically to the default ADOM named FortiGate.
C. FortiGate will be automatically added to the Training ADOM.
D. By default, the unauthorized FortiGate will appear in the root ADOM.

Answer: D

QUESTION 3
You are moving managed FortiGate devices from one ADOM to a new ADOM.
Which statement correctly describes the expected result?

A. Any unused objects from a previous ADOM are moved to the new ADOM automatically.
B. The shared policy package will not be moved to the new ADOM.
C. Policy packages will be imported into the new ADOM automatically.
D. Any pending device settings will be installed automatically.

Answer: B

Examkingdom Fortinet NSE5_FMG-6.4 Exam pdf, Certkingdom Fortinet NSE5_FMG-6.4 PDF

MCTS Training, MCITP Training

Best Fortinet NSE5_FMG-6.4 Certification, Fortinet NSE5_FMG-6.4 Training at certkingdom.com

Exam SC-200 Microsoft Security Operations Analyst Update 28 Jan 2022

Audience Profile
The Microsoft Security Operations Analyst collaborates with organizational stakeholders to secure information technology systems for the organization. Their goal is to reduce organizational risk by rapidly remediating active attacks in the environment, advising on improvements to threat protection practices, and referring violations of organizational policies to appropriate stakeholders.
Responsibilities include threat management, monitoring, and response by using a variety of security solutions across their environment. The role primarily investigates, responds to, and hunts for threats using Microsoft Sentinel, Microsoft Defender for Cloud, Microsoft 365 Defender, and third-party security products. Since the security operations analyst consumes the operational output of these tools, they are also a critical stakeholder in the configuration and deployment of these technologies.

Skills Measured
NOTE: The bullets that follow each of the skills measured are intended to illustrate how we are assessing that skill. This list is NOT definitive or exhaustive.
NOTE: Most questions cover features that are general availability (GA). The exam may contain questions on Preview features if those features are commonly used.

Mitigate threats using Microsoft 365 Defender (25-30%)
Detect, investigate, respond, and remediate threats to the productivity environment by using Microsoft Defender for Office 365
 detect, investigate, respond, and remediate threats to Microsoft Teams, SharePoint, and OneDrive
 detect, investigate, respond to, and remediate threats to email by using Defender for Office 365
 manage data loss prevention policy alerts
 assess and recommend sensitivity labels
 assess and recommend insider risk policies

Detect, investigate, respond, and remediate endpoint threats by using Microsoft Defender for Endpoint
 manage data retention, alert notification, and advanced features
 configure device attack surface reduction rules
 configure and manage custom detections and alerts
 respond to incidents and alerts
 manage automated investigations and remediations
 assess and recommend endpoint configurations to reduce and remediate vulnerabilities by using Microsoft's threat and vulnerability management solution
 manage Microsoft Defender for Endpoint threat indicators
 analyze Microsoft Defender for Endpoint threat analytics

Detect, investigate, respond, and remediate identity threats
 identify and remediate security risks related to sign-in risk policies
 identify and remediate security risks related to Conditional Access events
 identify and remediate security risks related to Azure Active Directory
 identify and remediate security risks using Secure Score
 identify, investigate, and remediate security risks related to privileged identities
 configure detection alerts in Azure AD Identity Protection
 identify and remediate security risks related to Active Directory Domain Services using Microsoft Defender for Identity

Detect, investigate, respond, and remediate application threats
 identify, investigate, and remediate security risks by using Microsoft Cloud App Security (MCAS)
 configure MCAS to generate alerts and reports to detect threats

Manage cross-domain investigations in the Microsoft 365 Defender portal
 manage incidents across Microsoft 365 Defender products
 manage actions pending approval across products
 perform advanced threat hunting

Mitigate threats using Microsoft Defender for Cloud (25-30%)
Design and configure a Microsoft Defender for Cloud implementation
 plan and configure Microsoft Defender for Cloud settings, including selecting target subscriptions and workspace
 configure Microsoft Defender for Cloud roles
 configure data retention policies
 assess and recommend cloud workload protection

Plan and implement the use of data connectors for ingestion of data sources in Microsoft Defender for Cloud
 identify data sources to be ingested for Microsoft Defender for Cloud
 configure automated onboarding for Azure resources
 connect on-premises computers
 connect AWS cloud resources
 connect GCP cloud resources
 configure data collection

Manage Microsoft Defender for Cloud alert rules

 validate alert configuration
 set up email notifications
 create and manage alert suppression rules

Configure automation and remediation

 configure automated responses in Microsoft Defender for Cloud
 design and configure playbook workflow automation in Microsoft Defender for Cloud
 remediate incidents by using Microsoft Defender for Cloud recommendations
 create an automatic response using an Azure Resource Manager template

Investigate Microsoft Defender for Cloud alerts and incidents
 describe alert types for Azure workloads
 manage security alerts
 manage security incidents
 analyze Microsoft Defender for Cloud threat intelligence
 respond to Microsoft Defender for Key Vault alerts
 manage user data discovered during an investigation

Mitigate threats using Microsoft Sentinel (40-45%)
Design and configure a Microsoft Sentinel workspace
 plan a Microsoft Sentinel workspace
 configure Microsoft Sentinel roles
 design Microsoft Sentinel data storage
 configure security settings and access for Microsoft Sentinel

Plan and implement the use of data connectors for ingestion of data sources in Microsoft Sentinel
 identify data sources to be ingested for Microsoft Sentinel
 identify the prerequisites for a data connector
 configure and use Microsoft Sentinel data connectors
 configure data connectors by using Azure Policy
 design and configure Syslog and CEF event collections
 design and configure Windows Security event collections
 configure custom threat intelligence connectors
 create custom logs in Azure Log Analytics to store custom data

Manage Microsoft Sentinel analytics rules
 design and configure analytics rules
 create custom analytics rules to detect threats
 activate Microsoft security analytics rules
 configure connector provided scheduled queries
 configure custom scheduled queries
 define incident creation logic

Configure Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel
 create Microsoft Sentinel playbooks
 configure rules and incidents to trigger playbooks
 use playbooks to remediate threats
 use playbooks to manage incidents
 use playbooks across Microsoft Defender solutions

Manage Microsoft Sentinel incidents
 investigate incidents in Microsoft Sentinel
 triage incidents in Microsoft Sentinel
 respond to incidents in Microsoft Sentinel
 investigate multi-workspace incidents
 identify advanced threats with User and Entity Behavior Analytics (UEBA)

Use Microsoft Sentinel workbooks to analyze and interpret data
 activate and customize Microsoft Sentinel workbook templates
 create custom workbooks
 configure advanced visualizations
 view and analyze Microsoft Sentinel data using workbooks
 track incident metrics using the security operations efficiency workbook

Hunt for threats using the Microsoft Sentinel portal
 create custom hunting queries
 run hunting queries manually
 monitor hunting queries by using Livestream
 perform advanced hunting with notebooks
 track query results with bookmarks
 use hunting bookmarks for data investigations
 convert a hunting query to an analytics rule

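Most of the analytics-rule objectives above reduce to the schema of a scheduled rule. The Python sketch below assembles the request body for a Microsoft Sentinel scheduled analytics rule; it is a minimal illustration, the KQL query is a made-up example, and field names should be verified against the `Microsoft.SecurityInsights` API version you target.

```python
def scheduled_rule(display_name, kql_query, severity="Medium",
                   frequency="PT1H", period="PT1H", threshold=0):
    """Build the body of a Microsoft Sentinel scheduled analytics rule.

    Durations use ISO 8601 (PT1H = hourly); the rule raises an alert
    when the query returns more than `threshold` results.
    """
    return {
        "kind": "Scheduled",
        "properties": {
            "displayName": display_name,
            "enabled": True,
            "severity": severity,
            "query": kql_query,           # KQL detection logic
            "queryFrequency": frequency,  # how often the query runs
            "queryPeriod": period,        # lookback window per run
            "triggerOperator": "GreaterThan",
            "triggerThreshold": threshold,
        },
    }

rule = scheduled_rule(
    "Excessive sign-in failures",
    "SigninLogs | where ResultType != 0 "
    "| summarize count() by UserPrincipalName",
)
```

The `triggerOperator`/`triggerThreshold` pair is what the blueprint calls "incident creation logic": it decides when query results become an alert.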
QUESTION 1
Which team's issue can be resolved by using Microsoft Defender for Endpoint?

A. executive
B. sales
C. marketing

Answer: B

QUESTION 2
Which team's issue can be resolved by using Microsoft Defender for Office 365?

A. executive
B. marketing
C. security
D. sales

Answer: B

QUESTION 3
Your company uses Microsoft Defender for Endpoint.
The company has Microsoft Word documents that contain macros. The documents are used frequently on the
devices of the company’s accounting team.
You need to hide false positives in the Alerts queue, while maintaining the existing security posture.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Resolve the alert automatically.
B. Hide the alert.
C. Create a suppression rule scoped to any device.
D. Create a suppression rule scoped to a device group.
E. Generate the alert.

Answer: B,C,E


QUESTION 4
You are investigating a potential attack that deploys a new ransomware strain.
You have three custom device groups. The groups contain devices that store highly sensitive information.
You plan to perform automated actions on all devices.
You need to be able to temporarily group the machines to perform actions on the devices.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Assign a tag to the device group.
B. Add the device users to the admin role.
C. Add a tag to the machines.
D. Create a new device group that has a rank of 1.
E. Create a new admin role.
F. Create a new device group that has a rank of 4.

Answer: A,C,D


NetSuite ERP Consultant Exam

About the NetSuite ERP Consultant Exam
This is the second exam required for NetSuite ERP Consultant Certification, to be taken after the SuiteFoundation Exam has been passed.
Passing both the NetSuite ERP Consultant Exam and the SuiteFoundation Exam certifies that you have the knowledge and skills necessary to be a NetSuite Certified ERP Consultant. See Description of a Qualified Candidate below.

Conditions:
• This will be a proctored examination.
• No written or online reference materials may be used during the exam.
• Complete all questions in the allotted time.

Description of a Qualified Candidate
The candidate has the equivalent experience of performing 5-10 medium scale, or 2-3 enterprise NetSuite ERP implementations, which is roughly equivalent to at least 2 years’ worth of NetSuite implementations in a consultant-related role. They can match NetSuite ERP solutions to business requirements. This person can advise on how to change standard ERP workflows, when to use scripting tools to meet the business needs, and when to extend use through integrations. This consultant can explain the implications and benefits of NetSuite configuration options.

Maintaining Your Certification
For details about retake policy and ongoing requirements to maintain your certification or examination status see the NS Certification Policy available on the NS Certification webpage.

Recommended Skill Level:
Equivalent to at least 2 years’ worth of NetSuite implementations in a consultant-related role
Subject Areas and Objectives Covered by the Test
The broad subject areas covered in the exam include:
• ERP
• Analytics
• OneWorld
• Platform
• Data Strategy

The table below provides each subject area in more detail. For suggestions on studying for each objective, proceed to the matching sections on the pages that follow.

Subject Objective
ERP: Record to Report (1)
Identify the Features and Preferences which need to be configured to support a customer specific requirement.
Identify the Best Practices of setting up the NetSuite Chart of Accounts.
Identify how segments are handled in NetSuite, and when to use custom segments.
Recognize NetSuite recommended Best Practices around period end close, including when Multi-Book is used.
Identify the constraints of features and functionality related to Journal Entries.
Identify how exchange rates are used throughout NetSuite.
Identify how NetSuite taxes are configured.

ERP: Order to Cash (2)
Identify considerations when setting up related Entity records.
Given a requirement, determine the configuration/solution for an order process.
Identify the implications of shipping setup and use.
Determine the considerations for setup and execution of the fulfillment process.
Given a scenario, determine appropriate invoicing configuration and setup and when consolidated invoicing feature is appropriate.
Understand the different components of the Advanced Revenue Management (ARM) or basic revenue recognition module, and be able to set it up to support the recognition method elected by the user.
Identify recommended practices for customer payment setup and processing.
Given a requirement, determine the configuration/solution for a customer return process.

ERP: Design to Build (3)
Select the appropriate Item Type for a given use case.
Given a scenario, determine appropriate inventory management options.
Identify the transactions and records related to building Assemblies.
Identify options in pricing.
Recognize how various transactions affect item costing.

ERP: Procure to Pay (4)
Given a requirement, determine the configuration/solution for a purchase and receiving process.
Given a requirement, determine the configuration/solution for a Vendor Bill, Bill Payments.
Given a requirement, determine the configuration/solution for a vendor return process.
Recognize the elements in configuring the Fixed Assets Module.

Analytics and Dashboard (5)
Given a scenario, select the appropriate Dashboard portlet content.
Recognize which SQL expressions will yield desired search results. (CASE, DECODE, NULLIF, Date functions)
Identify configuration options available when customizing email alerts for saved searches.
Using Expression Builder, select the expression which would yield the desired results.
Recognize which record to use as the basis of a search to yield a desired result.
Identify the feature in saved searches to obtain the described results.
Given a scenario, determine whether to use a custom search or a report.
Identify the advantages of various methods of publishing dashboards.
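The saved-search formula objective above leans on Oracle-style SQL expressions. NetSuite itself evaluates formulas in Oracle SQL, but as a rough illustration the snippet below uses Python's built-in SQLite (which supports CASE and NULLIF, though not Oracle's DECODE) to show how CASE buckets rows and NULLIF maps a sentinel value to NULL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE txn (id INTEGER, amount REAL, status TEXT)")
con.executemany("INSERT INTO txn VALUES (?, ?, ?)",
                [(1, 250.0, "open"), (2, 0.0, "paid"), (3, 900.0, "open")])

# CASE buckets each row by amount; NULLIF(amount, 0) turns a zero
# into NULL, so downstream expressions can treat "no amount" safely.
rows = con.execute("""
    SELECT id,
           CASE WHEN amount >= 500 THEN 'large' ELSE 'small' END,
           NULLIF(amount, 0)
    FROM txn ORDER BY id
""").fetchall()
print(rows)  # [(1, 'small', 250.0), (2, 'small', None), (3, 'large', 900.0)]
```

The same CASE/NULLIF shapes transfer directly into a NetSuite formula (text/numeric) field, with the column names replaced by record fields.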

Platform (6)
Given a situation, identify whether SuiteScript or SuiteFlow is required to meet a customer requirement.
Identify components of the workflow in a diagram.
Recognize available SuiteFlow actions.
Compare differences between custom form and field display options, and the use cases where each is appropriate.
Identify various methods of restricting users to a particular custom form.
Given a custom form layout requirement, determine which feature(s) should be used, including Advanced PDF.
Given a situation, identify when to use SuiteBundler.
Identify custom field settings for displaying data from other records.
Recognize NetSuite recommended practices for creation and use of custom fields.
Identify implications of changing field properties or values in a live environment.

Data Strategy (7)
Determine the proper use of advanced options in CSV Import.
Understand the use of CSV templates and saved mappings.
Analyze an error in the CSV Import process to determine the appropriate resolution.
Given a customer requirement, determine migration strategy for transaction history and opening balances and inventory counts.
Organize the steps required to successfully complete an import of CSV records and sublist data.
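For the sublist-import objective above, the usual pattern is to repeat the parent record's External ID on every sublist row so the import can group child lines under the right parent. A hypothetical sketch; the column names here are illustrative, not actual NetSuite field IDs:

```python
import csv, io

# One sales order ("SO-1001") with two item sublist lines; the
# repeated externalid column is what ties the rows together on import.
rows = [
    {"externalid": "SO-1001", "entity": "ACME Corp",
     "item": "Widget", "quantity": "5"},
    {"externalid": "SO-1001", "entity": "ACME Corp",
     "item": "Gadget", "quantity": "2"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["externalid", "entity",
                                         "item", "quantity"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().splitlines()[0])  # externalid,entity,item,quantity
```

In the CSV Import assistant, the grouping column is then mapped to the record's External ID and the remaining columns to body and sublist fields.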

Data Security (8)
Identify the implications of permissions related to accessing and manipulating data in bulk.
Recognize recommended NetSuite practices for creating and assigning custom roles.

QUESTION 1
A company purchased and received 100 chairs for a conference room. Four of the chairs were the wrong style and must be returned.
What steps are performed after approving the Vendor Return Authorization?

A. Shipping the Return > Close Return
B. Shipping the Return > Crediting the Return
C. Shipping the Return > Create a Journal Entry
D. Shipping the Return > Mark Shipped on the Return Authorization

Answer: D

QUESTION 2
Which statement is true about Drop Ship and Special Order items?

A. Can be used for Non-Inventory items for Resale and Inventory items.
B. Items can be marked as both Drop Ship and Special Order.
C. Vendor ships items to customer’s address.
D. Impact Asset and Cost of Goods Sold (COGS) accounts upon item receipt and fulfillment.

Answer: C

QUESTION 3
Which is a valid permission level for Persist Search?

A. Create
B. View
C. Edit
D. Full

Answer: A


300-420 Designing Cisco Enterprise Networks (ENSLD) Exam

Exam Description
The Designing Cisco Enterprise Networks v1.0 (ENSLD 300-420) exam is a 90-minute exam associated with the CCNP Enterprise and Cisco Certified Specialist – Enterprise Design certifications. This exam certifies a candidate’s knowledge of enterprise design including advanced addressing and routing solutions, advanced enterprise campus networks, WAN, security services, network services, and SDA. The course, Designing Cisco Enterprise Networks, helps candidates to prepare for this exam.

The following topics are general guidelines for the content likely to be included on the exam. However, other related topics may also appear on any specific delivery of the exam. To better reflect the contents of the exam and for clarity purposes, the guidelines below may change at any time without notice.

25% 1.0 Advanced Addressing and Routing Solutions
1.1 Create structured addressing plans for IPv4 and IPv6
1.2 Create stable, secure, and scalable routing designs for IS-IS
1.3 Create stable, secure, and scalable routing designs for EIGRP
1.4 Create stable, secure, and scalable routing designs for OSPF
1.5 Create stable, secure, and scalable routing designs for BGP
1.5.a Address families
1.5.b Basic route filtering
1.5.c Attributes for path preference
1.5.d Route reflectors
1.5.e Load sharing
1.6 Determine IPv6 migration strategies
1.6.a Overlay (tunneling)
1.6.b Native (dual-stacking)
1.6.c Boundaries (IPv4/IPv6 translations)
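For the overlay strategy in 1.6.a, 6to4 (RFC 3056) is a classic example: a site's IPv6 /48 prefix is derived mechanically from its public IPv4 address by embedding the 32 address bits right after the 2002::/16 prefix. A short Python sketch of that derivation:

```python
import ipaddress

def six_to_four_prefix(ipv4):
    """Derive the 6to4 /48 prefix (RFC 3056) for a public IPv4 address:
    the 16-bit 2002 prefix is followed by the 32 IPv4 address bits,
    giving the site its 2002:AABB:CCDD::/48 network."""
    v4 = ipaddress.IPv4Address(ipv4)
    prefix = (0x2002 << 112) | (int(v4) << 80)
    return ipaddress.IPv6Network((prefix, 48))

print(six_to_four_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

192.0.2.1 is 0xC0000201 in hex, which is where the c000:201 portion of the prefix comes from.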

25% 2.0 Advanced Enterprise Campus Networks
2.1 Design campus networks for high availability
2.1.a First Hop Redundancy Protocols
2.1.b Platform abstraction techniques
2.1.c Graceful restart
2.1.d BFD
2.2 Design campus Layer 2 infrastructures
2.2.a STP scalability
2.2.b Fast convergence
2.2.c Loop-free technologies
2.2.d PoE and WoL
2.3 Design multicampus Layer 3 infrastructures
2.3.a Convergence
2.3.b Load sharing
2.3.c Route summarization
2.3.d Route filtering
2.3.e VRFs
2.3.f Optimal topologies
2.3.g Redistribution
2.4 Describe SD-Access Architecture (underlay, overlay, control and data plane, automation, wireless, and security)
2.5 Describe SD-Access fabric design considerations for wired and wireless access (overlay, fabric design, control plane design, border design, segmentation, virtual networks, scalability, over the top and fabric for wireless, multicast)

20% 3.0 WAN for Enterprise Networks

3.1 Compare WAN connectivity options
3.1.a Layer 2 VPN
3.1.b MPLS Layer 3 VPN
3.1.c Metro Ethernet
3.1.d DWDM
3.1.e 4G/5G
3.1.f SD-WAN customer edge
3.2 Design site-to-site VPN
3.2.a Dynamic Multipoint VPN (DMVPN)
3.2.b Layer 2 VPN
3.2.c MPLS Layer 3 VPN
3.2.d IPsec
3.2.e Generic Routing Encapsulation (GRE)
3.2.f Group Encrypted Transport VPN (GET VPN)
3.3 Design high availability for enterprise WAN
3.3.a Single-homed
3.3.b Multihomed
3.3.c Backup connectivity
3.3.d Failover
3.4 Describe Cisco SD-WAN Architecture (orchestration plane, management plane, control plane, data plane, on-boarding and provisioning, security)
3.5 Describe Cisco SD-WAN design considerations (control plane design, overlay design, LAN design, high availability, redundancy, scalability, security design, QoS and multicast over SD-WAN fabric)

20% 4.0 Network Services

4.1 Select appropriate QoS strategies to meet customer requirements (DiffServ, IntServ)
4.2 Design end-to-end QoS policies
4.2.a Classification and marking
4.2.b Shaping
4.2.c Policing
4.2.d Queuing
4.3 Design network management techniques
4.3.a In-band vs. out-of-band
4.3.b Segmented management networks
4.3.c Prioritizing network management traffic
4.4 Describe multicast routing concepts (source trees, shared trees, RPF, rendezvous points)
4.5 Design multicast services (SSM, PIM bidirectional, MSDP)

10% 5.0 Automation

5.1 Choose the correct YANG data model set based on requirements
5.2 Differentiate between IETF, Openconfig, and Cisco native YANG models
5.3 Differentiate between NETCONF and RESTCONF
5.4 Describe the impact of model-driven telemetry on the network
5.4.a Periodic publication
5.4.b On-change publication
5.5 Compare dial-in and dial-out approaches to model-driven telemetry
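For objective 5.3, one concrete difference is that RESTCONF (RFC 8040) maps YANG data paths onto plain HTTP URLs, while NETCONF runs XML RPCs over an SSH session. The sketch below only assembles a RESTCONF read request and sends nothing; the hostname is a placeholder, and the standard `ietf-interfaces` module is used for illustration:

```python
def restconf_get(host, yang_path):
    """Assemble the URL and headers for a RESTCONF data read (RFC 8040).

    The /restconf/data resource and the yang-data media types are
    defined by the standard; pass the result to any HTTP client.
    """
    return {
        "url": "https://{}/restconf/data/{}".format(host, yang_path),
        "headers": {"Accept": "application/yang-data+json"},
    }

req = restconf_get("router1.example.net", "ietf-interfaces:interfaces")
print(req["url"])
```

The same `module:container` path addressing also underpins model-driven telemetry subscriptions, where the path selects what the device streams.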


QUESTION 1
Which two BGP features will result in successful route exchanges between eBGP neighbors sharing the same AS number? (Choose two.)

A. advertise-best-external
B. bestpath as-path ignore
C. client-to-client reflection
D. as-override
E. allow-as-in

Answer: D,E


QUESTION 2
A customer with an IPv4 only network topology wants to enable IPv6 connectivity while preserving the IPv4
topology services. The customer plans to migrate IPv4 services to the IPv6 topology, then decommission the
IPv4 topology. Which topology supports these requirements?

A. dual stack
B. 6VPE
C. 6to4
D. NAT64

Answer: A


QUESTION 3
A company is running BGP on a single router, which has two connections to the same ISP.
Which BGP feature ensures traffic is load balanced across the two links to the ISP?

A. Multihop
B. Multipath Load Sharing
C. Next-Hop Address Tracking
D. AS-Path Prepending

Answer: B


P_TSEC10_75 SAP Certified Technology Professional – System Security Architect Exam

Sub-solution: Administration
Delivery Methods: SAP Certification
Level: Professional
Exam: 80 questions
Cut Score: 63%
Duration: 180 mins
Expiry Date: 31 Mar 2022
Languages: English

Description
The “SAP Certified Technology Professional – System Security Architect” certification exam verifies that the candidate possesses the deep knowledge required in the area of SAP system security and authorization. This certificate proves that the candidate has an advanced understanding within the Technology Consultant profile and is able to apply these skills practically and provide guidance in SAP project implementations in the role of an SAP Security Architect. Furthermore, the holder of this certification is capable of reviewing and evaluating the security level of complex on-premise, cloud, and hybrid system architectures.

Notes
To ensure success, SAP recommends combining education courses and hands-on experience to prepare for your certification exam as questions will test your ability to apply the knowledge you have gained in training.

You are not allowed to use any reference materials during the certification test (no access to online documentation or to any SAP system).

Topic Areas
Please see below the list of topics that may be covered within this certification and the courses that cover them. Its accuracy does not constitute a legitimate claim; SAP reserves the right to update the exam content (topics, items, weighting) at any time.


SAP System Security Fundamentals > 12%

Explain SAP system security fundamentals
ADM900 (NW AS 7.52)
ADM940 (SAP S/4HANA 1909)
ADM960 (SAP NETWEAVER 7.55)

Authorization Concept for SAP Business Suite > 12%
Describe and implement the authorization concept for SAP Business Suite
ADM900 (NW AS 7.52)
ADM940 (SAP S/4HANA 1909)

Authorization Concept for SAP S/4HANA > 12%
Describe and implement the authorization concept for SAP S/4HANA
ADM945 (SAP S/4HANA 1809)

Secure SAP System Management > 12%
Explain how to secure an SAP system and conduct security checks
ADM950 (SEE COURSE DETAIL)
ADM960 (SAP NETWEAVER 7.55)

SAP NetWeaver Application Server Security > 12%
Describe and implement security in an SAP NetWeaver Application Server
ADM960 (SAP NETWEAVER 7.55)

Authorization, Security and Scenarios in SAP HANA 8% – 12%
Explain authorization, security and scenarios in SAP HANA
HA240 (SAP HANA 2.0 SPS04)

Security in SAP Gateway and SAP Fiori System Landscape < 8%
Explain security in SAP Gateway and SAP Fiori system landscape.
GW100 (SAP S/4HANA 1809)
UX200 (SAP S/4HANA 2020)

General Information

Exam Preparation
All SAP consultant certifications are available as Cloud Certifications in the Certification Hub and can be booked with product code CER006. With CER006 – SAP Certification in the Cloud, you can take up to six exam attempts of your choice in one year – from wherever and whenever it suits you! Test dates can be chosen and booked individually.

Each specific certification comes with its own set of preparation tactics. We define them as “Topic Areas” and they can be found on each exam description. You can find the number of questions, the duration of the exam, what areas you will be tested on, and recommended course work and content you can reference.

Certification exams might contain unscored items that are being tested for upcoming releases of the exam. These unscored items are randomly distributed across the certification topics and are not counted towards the final score. The total number of items of an examination as advertised in the Training Shop is never exceeded when unscored items are used.

Please be aware that the professional-level certification also requires several years of practical on-the-job experience and addresses real-life scenarios.

For more information refer to our FAQs.
SAP Global Certification FAQ – Overview
SAP Global Certification FAQ – Exam Process
SAP Global Certification FAQ – Post-Exam Process

Safeguarding the Value of Certification
SAP Education has worked hard together with the Certification & Enablement Influence Council to enhance the value of certification and improve the exams. An increasing number of customers and partners are now looking towards certification as a reliable benchmark to safeguard their investments. Unfortunately, the increased demand for certification has brought with it a growing number of people who try to attain SAP certification through unfair means. This ongoing issue has prompted SAP Education to place a new focus on test security. Please take a look at our post to understand what you can do to help protect the credibility of your certification status.

Our Certification Test Security Guidelines will help you as test taker to understand the testing experience.

QUESTION 1
You want to use Configuration Validation functionality in SAP Solution Manager to check the consistency of settings across your SAP environment.
What serves as the reference basis for Configuration Validation? (Choose two.)

A. A virtual set of manually maintained configuration items
B. A result list of configuration items from SAP EarlyWatch Alert (EWA)
C. A target system in your system landscape
D. A list of recommended settings attached to a specific SAP Note

Answer: A,C

QUESTION 2
What is the User Management Engine (UME) property “connection pooling” used for? (Choose two.)

A. To share server resources among requesting LDAP clients
B. To improve performance of requests to the LDAP directory server
C. To avoid unauthorized request to the LDAP directory server
D. To create a new connection to the LDAP directory server for each request

Answer: A,B

QUESTION 3
Which of the following events will create security alerts in the CCMS Alert Monitor of SAP Solution Manager? (Choose two.)

A. Changes to the instance profile
B. Call of RFC functions
C. Start of reports
D. Manual table changes

Answer: B,C

Examkingdom SAP P_TSEC10_75 Exam pdf, Certkingdom SAP P_TSEC10_75 PDF

MCTS Training, MCITP Training

Best SAP P_TSEC10_75 Certification, SAP P_TSEC10_75 Training at certkingdom.com

NCP-MCA Nutanix Certified Professional – Multicloud Automation Exam

1.The Exam
1.1 PURPOSE OF EXAM
The Nutanix Certified Professional – Multicloud Automation 5 exam tests whether candidates are comfortable with the principles of automation, as well as the automation of infrastructure and single/multi-tiered applications within the Nutanix platform. Successful candidates demonstrate mastery of these skills and abilities.

1.2 NUMBER OF QUESTIONS
The exam consists of 75 multiple choice and multiple response questions.

1.3 PRICING
The cost for the NCP-MCA 5 exam is $199 USD.

1.4 PASSING SCORE
The passing score for this exam is 3000, using a scaled scoring method. The scale is from 1000-6000. Scaled scores are calculated using a mathematical formula that considers a variety of factors, including the number and type of exam questions included in a specific version of the exam. Because this combination may vary in different versions of the same examination, scaled scores provide a fair score for everyone based on the version of the exam taken.

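Nutanix does not publish its scaled-scoring formula, so the sketch below is purely a hypothetical illustration of how a raw-score fraction could map onto the 1000–6000 scale with the cut landing on the passing score of 3000. The 55% raw cut used here is an invented assumption, not a Nutanix figure.

```python
# Hypothetical illustration of scaled scoring on a 1000-6000 scale.
# Nutanix does not publish its formula; this assumes a simple linear map
# in which an assumed raw cut score (here 55% of raw points) lands
# exactly on the passing score of 3000.

SCALE_MIN, SCALE_MAX, PASSING = 1000, 6000, 3000

def scaled_score(raw_fraction, cut_fraction=0.55):
    """Map a raw-score fraction (0.0-1.0) onto the 1000-6000 scale."""
    if raw_fraction < cut_fraction:
        # Below the cut: interpolate between SCALE_MIN and PASSING.
        score = SCALE_MIN + (PASSING - SCALE_MIN) * (raw_fraction / cut_fraction)
    else:
        # At or above the cut: interpolate between PASSING and SCALE_MAX.
        score = PASSING + (SCALE_MAX - PASSING) * (
            (raw_fraction - cut_fraction) / (1 - cut_fraction))
    return round(score)

def passed(raw_fraction, cut_fraction=0.55):
    return scaled_score(raw_fraction, cut_fraction) >= PASSING
```

The point of the scaled approach is exactly what the text describes: different exam versions can have different cut fractions, yet every candidate is measured against the same 3000 passing score.
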
1.5 HOW OBJECTIVES RELATE TO QUESTIONS ON THE EXAM
Objectives summarize what the test is designed to measure. Objectives are developed by Exam Developers and Subject Matter Experts based on identified tasks that relate to the job of automating infrastructure and single/multi-tiered applications within a Nutanix multicloud environment. Once the initial development process is complete, these objectives are verified by an external group of individuals in the actual job role. Finally, the number of questions for each objective is determined, relating directly to the criticality of the task in the job role.

1.6 LANGUAGES
The exam is available in English, Japanese, and Simplified Chinese.

1.7 TIME LIMIT
The time limit for the exam is 120 minutes.

1.8 SCHEDULING AND TAKING THE EXAM
This exam is delivered via remote proctoring. After registering for the exam and providing valid identification, you will receive information on how to take the exam from your location using a web browser. Because the exam is remote proctored, you will be provided with a locked down, monitored, secure exam experience.

1.9 CERTIFICATION TRACKS
The Nutanix Certified Professional – Multicloud Automation 5 exam is a core component of the Nutanix Multicloud Automation Management track. Passing this exam results in achieving the NCP-MCA 5 certification.
The certification requires a passing score on the exam. While it is not required that you attend a course, Nutanix provides training that covers the objectives on the exam.

Details on the course and track are provided in section 4.

1.10 RETAKE POLICY
If a candidate fails an exam on the first attempt, he or she is allowed two additional attempts. There is a seven-day waiting period between attempts. Like the first attempt, these are paid for individually and Nutanix recommends that you allow sufficient time between attempts to be properly prepared and to maximize your chances for success.

Please note: After three attempts, you will be unable to take the exam for 60 days, after which you can e-mail university@nutanix.com and request that your attempts are reset. Nutanix recommends you utilize the time to thoroughly review this guide and the related references and/or take the recommended training for this exam.

1.11 EXAM SECURITY
Nutanix reserves the right to refuse to certify a candidate who violates exam security policies. This includes copying and redistributing exam material, using any type of study material during the exam itself, attempting to photograph exam items, and taking an exam using a false identity. Your identity is captured as part of the exam registration process and must be validated before you will be allowed to take the exam.

1.12 RECERTIFICATION
Once you have passed the Nutanix Certified Professional – Multicloud Automation 5 exam and achieved the NCP-MCA 5 certification, it will remain valid until Nutanix releases the next version of the certification. At that time, you have one year to upgrade your certification to the new release before it expires.

2. Intended Audience
2.1 INTENDED AUDIENCE
A candidate for the Nutanix Certified Professional – Multicloud Automation 5 exam and NCP-MCA 5 certification understands IT automation and how Nutanix multicloud solutions help to automate infrastructure and applications. Candidates for the NCP-MCA 5 certification will understand how to design single and multi-tiered applications and will be able to incorporate manual tasks in the automation process.

Successful candidates have at least 3-6 months of automation experience within the Nutanix environment. They are typically IT professionals interested in pursuing, or currently engaged in, DevOps concepts and/or multicloud automation using Nutanix technologies. The successful candidate will most likely have taken training courses such as the Nutanix Multicloud Automation Administration (NMCAA) course.

3. Objectives covered in the NCP-MCA 5 Exam

3.1 INTRODUCTION
It is recommended that candidates have the knowledge and skills necessary to understand IT automation, leverage Nutanix multicloud solutions to automate infrastructure and single/multi-tiered applications, and be comfortable incorporating
manual tasks in the automation process before attempting the Nutanix Certified Professional – Multicloud Automation 5 exam. It is also recommended that the candidate complete the training course described in Section 4 prior to taking the exam.
For the NCP-MCA 5 certification, candidates will be tested on the following software versions:

• AOS and Prism: version 5.20
• Calm: version 3.2.2
3.2 OBJECTIVES
Prior to taking this exam, candidates should understand each of the following objectives.
Each objective is listed below, along with related tools the candidate should have experience with, and related documentation that contains information relevant to the objective. Please note that some documentation requires access via the Support Portal. Information on creating an account for use with the Support Portal can be found here. All objectives may also be referenced in other product documentation not specifically highlighted below. The candidate should be familiar with all relevant product documentation or have the equivalent skills.

Section 1 – Describe and Differentiate Automation Concepts and Principles
Objective 1.1 – Determine and apply the steps required to automate a given manual process
Knowledge:
• Determine the logical steps in automating a process
• Determine how to use Calm to complete pre-provisioning steps
References
• Runbooks Overview
• Setting up the Service Dependencies
• Blueprint Overview
• Nutanix API Reference Guides
• Runbook Usage
• Pre-Create Tasks
• Multi-VM Blueprints
• Exporting or Importing Playbooks
• Nutanix Integration with ServiceNow
• Creating Playbooks Using Triggers
• How to find a VM creation date and time in Prism Central

Objective 1.2 – Demonstrate an understanding of event-driven Playbooks
Knowledge:
• Demonstrate an understanding of how to create a Playbook by setting a trigger and defining actions
• Demonstrate an understanding of how to correctly arrange Playbook steps using X-Play
References
• Creating Playbooks Using Triggers
• Prerequisites for VM Actions in Playbooks
• Creating Playbooks Using Webhook
• Playbook Actions
• Creating Playbooks Using Event
• Configuring Manual Parameters
• Enhance your Playbooks with REST APIs
• Create Custom Parameters in your Playbook
• How to Reuse Playbooks Across Multiple Environments

Objective 1.3 – Define the components of X-Play
Knowledge:
• Define Playbook’s action gallery and plays
• Identify alerts and manual triggers
References
• Task Automation
• Plays
• Creating Playbooks Using Event
• Running a Playbook (Manual Trigger)
• Automated Remediation with X-Play
• What’s New in Prism Pro X-Play

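The trigger-and-actions structure that these X-Play references describe can be modeled in a few lines. This is an illustrative sketch only; the class and method names below are invented, not Nutanix API names.

```python
# Illustrative model of an X-Play Playbook: one trigger paired with an
# ordered list of actions (the "plays"), run in sequence when the
# trigger fires. All names here are invented for illustration.

class Playbook:
    def __init__(self, trigger_event, actions):
        self.trigger_event = trigger_event   # e.g. a predefined alert name
        self.actions = actions               # callables executed in order

    def handle(self, event, context):
        """Run all actions against a shared context if the event matches."""
        if event != self.trigger_event:
            return False                     # not our trigger; ignore
        for action in self.actions:
            action(context)
        return True
```

In this model, a manual trigger is simply a direct call to `handle` with the matching event name, while an alert-based trigger is the same call made by the monitoring side.
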
Objective 1.4 – Demonstrate an understanding of Categories
Knowledge:
• Demonstrate an understanding of how to create Categories
• Explain the effects of Categories
• Given a Category and a blueprint, infer whether a policy will be applied to a VM
References
• Assigning a Category
• Categories Summary View
• Category Management
• Creating Custom Alert Policies
• Assigning a Role
• Modifying a Category
• Configuring an Image Placement Policy
• Adding Custom Alert Policies

Section 2 – Deploy and Configure Calm and Related Components
Objective 2.1 – Create a blueprint to deploy infrastructure and applications using Calm
Knowledge:
• Demonstrate an understanding of how to create a Calm blueprint
• Demonstrate an understanding of how to create a substrate
• Demonstrate an understanding of how to create a task
• Demonstrate an understanding of how to configure install/uninstall packages
• Determine which task type to use per script language/function
• Demonstrate an understanding of prerequisites for Calm deployment
• Demonstrate an understanding of how to utilize Calm built-in macros within blueprints
• Demonstrate an understanding of how to set application infrastructure requirements related to automation optimization
References
• Account Configuration
• Adding Accounts to a Project
• Adding and Configuring Scale Out and Scale In
• Macros Overview
• Settings Configuration
• Pre-Create and Post-Delete Task Overview
• Setting up the Service Dependencies
• Publishing a Blueprint
• Prerequisites for Enabling Nutanix Calm
• Nutanix Calm Overview
• Adding an Action

Objective 2.2 – Demonstrate an understanding of Calm-managed infrastructure and applications
Knowledge:
• Demonstrate an understanding of how to scale out
• Demonstrate an understanding of how to retire a managed application
• Demonstrate an understanding of how to manage an application
• Demonstrate an understanding of how to run a task
References
• Adding and Configuring an Application Profile
• Creating a Brownfield Application
• Calm Administration – Manage Tab
• Calm Administration – Getting Started
• Calm Roles and Responsibilities Matrix
• Calm Library Overview
• Adding and Configuring Scale Out and Scale In
• Calm Blueprints Public Repository
• Calm Benchmarks
• Calm Onboarding
• Discover the Applications Running in your Environment

Objective 2.3 – Apply typical configuration settings to a Calm deployment
Knowledge:
• Describe how to configure providers
• Describe how to configure a project
• Define user permissions and roles
• Recall the requirements for setting up environments within Calm to deploy to various supported Clouds
• Define Marketplace capabilities
References
• Enabling Showback
• Providers Overview
• Calm – Software as a Service (SaaS)
• Calm SaaS API Key Management
• Marketplace Manager Overview
• Calm Pre-Configuration

Objective 2.4 – Identify common blueprint features
Knowledge:
• Identify built-in macros, tasks, and action/task dependencies
• Identify blueprint features to include authentication credentials in a cloudinit and sysprep
• Identify endpoints
• Describe how to execute Calm Runbooks within blueprints
• Recognize the syntax of a macro
References
• Built-in Macros
• Adding and Configuring an Application Profile
• Pre-Create Tasks
• Runtime Variables Overview
• Configuring Nutanix and Existing Machine VM, Package, and Service
• Blueprints Usage

Objective 2.5 – Describe the features and requirements of Calm
Knowledge:
• Describe Calm requirements
• Define Calm use cases
• Define Calm
• Describe plugin use cases
• Demonstrate an understanding of how to execute deployment of MySQL, SQL, MariaDB, Oracle, PostgreSQL, and single/clustered databases
• Illustrate zero-byte clones and obfuscation for dev/test scenarios
References
• Calm Marketplace Overview
• Macros Overview
• Enabling Nutanix Calm
• Providers Overview
• Prerequisites for Enabling Nutanix Calm
• Runbook Usage
• Enabling Policy Engine
• Getting Started with Nutanix Calm

Section 3 – Validate Blueprints, Playbooks, and Automation Settings
Objective 3.1 – Understand the typical causes of a blueprint deployment failure
Knowledge:
• Demonstrate an understanding of audit trails
• Given an error message, explain the cause of the issue
References
• Calm – Authentication failure during task execution
• Multi-VM Blueprints
• Calm – App deployment/deletion fails when deployed VM has no IP address
• Calm Project Environments

Objective 3.2 – Describe where to find information to assist in validation
Knowledge:
• Given an issue with Calm, locate troubleshooting data and collect the logs
• Given an issue with Playbooks, locate troubleshooting data and collect the logs
References
• Plays
• Downloading an Application Log Archive
• Calm – Azure Troubleshooting
• Calm Administration – Audit Tab
• Approving and Publishing a Blueprint or Runbook

Objective 3.3 – Validate typical Playbook configurations
Knowledge:
• Interpret data/logs to determine the cause for a given issue
• Given a Playbook and symptom, explain a given issue
References
• Playbook Actions
• Creating Playbooks Using a Predefined Alert
• Prerequisites for VM Actions in Playbooks
• Task Automation

Objective 3.4 – Understand the typical causes of issues associated with automation
Knowledge:
• Given a scenario, interpret a given issue
• Given a log, infer what type of issue may be present
• Explain how to optimize a workflow to align with best practices
• Given a scenario, explain how to triage and narrow down the possible root causes of an issue
References
• Calm showback is not able to reach beam service
• Onboarding Nutanix Account
• Calm – Category OSType errors during Blueprint launch
• Calm – Blueprint Save validation errors for VMware blueprints
• Calm – Logon session errors when importing scripts that map network drives
• Integrated Linux Based Powershell Gateway Errors
• Calm – Windows Blueprint process creation failure errors
• Removing a Database Server VM or Database Server Cluster

4. NCP-MCA 5 Training Recommendations


QUESTION 1

In order to give Consumers the ability to modify attributes, what should the Blueprint creator implement in the design?

A. Custom actions
B. eScript task with custom macros
C. Runtime variables
D. HTTP task with built-in macros

Answer: C

QUESTION 2
Which action should an administrator use to request a static IP address from an IPAM solution?

A. Profile
B. Pre-create
C. Guest Customization
D. Create

Answer: D

QUESTION 3
An administrator wants to be alerted when production VMs become idle. The VMs will be determined to be idle when CPU usage is lower than 5% for more than 5 minutes. All affected VMs are categorized as Environment:Production, since they have Flow microsegmentation rules.
What should the administrator do to satisfy this requirement?

A. Create an alert for all VMs, create a Playbook with this alert as the trigger and send an email as the action.
B. Create an alert for VMs in the correct category, create a Playbook with this alert as the trigger > take a snapshot > send an email as the action.
C. Create an alert for all VMs, create a Playbook with this alert as the trigger > reduce 1 CPU > send an email as the action.
D. Create an alert for VMs in the correct category, create a Playbook with this alert as the trigger and send an email as the action.

Answer: A

Exam DP-420 Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB (beta) Exam

Candidates for this exam must have solid knowledge and experience developing apps for Azure and working with Azure Cosmos DB database technologies. They should be proficient at developing applications by using the Core (SQL) API and SDKs, writing efficient queries and creating appropriate index policies, provisioning and managing resources in Azure, and creating server-side objects with JavaScript. They should be able to interpret JSON, read C# or Java code, and use PowerShell.

Note: Passing score: 700. Learn more about exam scores here. Beta exams are not scored immediately because we are gathering data on the quality of the questions and the exam. Learn more about the value and importance of beta exams.

Part of the requirements for: Microsoft Certified: Azure Cosmos DB Developer Specialty
Related exams: none
Important: See details
Go to Certification Dashboard
Exam DP-420: Designing and Implementing Cloud-Native Applications Using Microsoft Azure Cosmos DB (beta)
Languages: English
Retirement date: none

This exam measures your ability to accomplish the following technical tasks: design and implement data models; design and implement data distribution; integrate an Azure Cosmos DB solution; optimize an Azure Cosmos DB solution; and maintain an Azure Cosmos DB solution.

Skills measured
Design and implement data models (35–40%)
Design and implement data distribution (5–10%)
Integrate an Azure Cosmos DB solution (5–10%)
Optimize an Azure Cosmos DB solution (15–20%)
Maintain an Azure Cosmos DB solution (25–30%)


Scores needed to pass exams
Technical exams: All technical exam scores are reported on a scale of 1 to 1,000 and are scaled such that the passing score is 700. Any score of 700 or greater is a "pass." Any score below 700 is a "fail." The actual number of items you need to answer correctly to pass is determined by a group of subject matter experts in conjunction with the Microsoft psychometrician during the development and maintenance of the exam. The passing score is based on the knowledge and skills needed to demonstrate competence in the skill domain and the difficulty of the questions that a candidate answers when taking the exam.
Microsoft Office exams: All Microsoft Office exam scores are reported on a scale of 1 to 1,000. The passing score varies from exam to exam and is provided on the score report. The actual cut score percentage is determined by a group of subject matter experts using a process like that used to set the cut score for Microsoft’s technical exams.

Audience Profile
A candidate for the Azure Cosmos DB Developer Specialty certification should have subject matter expertise designing, implementing, and monitoring cloud-native applications that store and manage data.
Responsibilities for this role include designing and implementing data models and data distribution, loading data into an Azure Cosmos DB database, and optimizing and maintaining the solution. These professionals integrate the solution with other Azure services. They also design, implement, and monitor solutions that consider security, availability, resilience, and performance requirements.
A candidate for this exam must have solid knowledge and experience developing apps for Azure and working with Azure Cosmos DB database technologies. They should be proficient at developing applications by using the Core (SQL) API and SDKs, writing efficient queries and creating appropriate index policies, provisioning and managing resources in Azure, and creating server-side objects with JavaScript. They should be able to interpret JSON, read C# or Java code, and use PowerShell.

Skills Measured
NOTE: The bullets that follow each of the skills measured are intended to illustrate how we are assessing that skill. This list is NOT definitive or exhaustive.
NOTE: Most questions cover features that are general availability (GA). The exam may contain questions on Preview features if those features are commonly used.

Design and Implement Data Models (35–40%)
Design and implement a non-relational data model for Azure Cosmos DB Core API
• develop a design by storing multiple entity types in the same container
• develop a design by storing multiple related entities in the same document
• develop a model that denormalizes data across documents
• develop a design by referencing between documents
• identify primary and unique keys
• identify data and associated access patterns
• specify a default TTL on a container for a transactional store

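The default-TTL objective above can be summarized with a simplified in-memory model of the documented semantics: the container-level default applies unless an item carries its own `ttl`, a container setting of `None` disables TTL entirely, and a TTL of -1 means the item never expires. This is a sketch of the rules, not SDK code.

```python
import time

def is_expired(item, container_default_ttl, now=None):
    """Simplified Cosmos DB TTL check: an item expires once the current
    time passes _ts (last-modified epoch seconds) plus the effective TTL."""
    now = time.time() if now is None else now
    if container_default_ttl is None:        # TTL disabled on the container
        return False
    ttl = item.get("ttl", container_default_ttl)
    if ttl == -1:                            # TTL on, but this item never expires
        return False
    return now >= item["_ts"] + ttl
```

Note that in the real service an expired item is purged by a background process, so it may briefly remain readable after its expiry time.
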
Design a data partitioning strategy for Azure Cosmos DB Core API
• choose a partition strategy based on a specific workload
• choose a partition key
• plan for transactions when choosing a partition key
• evaluate the cost of using a cross-partition query
• calculate and evaluate data distribution based on partition key selection
• calculate and evaluate throughput distribution based on partition key selection
• construct and implement a synthetic partition key
• design partitioning for workloads that require multiple partition keys

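Two common ways to build the synthetic partition key named in the objectives above are concatenating properties and appending a hash-derived suffix to spread a hot key across several logical partitions. The property names below are illustrative:

```python
import hashlib

def concat_key(device_id, date):
    """Synthetic key option 1: combine two properties into one value."""
    return f"{device_id}_{date}"

def suffixed_key(device_id, doc_id, buckets=10):
    """Synthetic key option 2: append a deterministic suffix (0..buckets-1)
    derived from the item id, spreading one hot key over several partitions."""
    suffix = int(hashlib.sha256(doc_id.encode()).hexdigest(), 16) % buckets
    return f"{device_id}-{suffix}"
```

The trade-off with the suffix approach is on the read side: fetching everything for one `device_id` requires fanning out a query across all `buckets` key values.
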
Plan and implement sizing and scaling for a database created with Azure Cosmos DB
• evaluate the throughput and data storage requirements for a specific workload
• choose between serverless and provisioned models
• choose when to use database-level provisioned throughput
• design for granular scale units and resource governance
• evaluate the cost of the global distribution of data
• configure throughput for Azure Cosmos DB by using the Azure portal

Implement client connectivity options in the Azure Cosmos DB SDK

• choose a connectivity mode (gateway versus direct)
• implement a connectivity mode
• create a connection to a database
• enable offline development by using the Azure Cosmos DB emulator
• handle connection errors
• implement a singleton for the client
• specify a region for global distribution
• configure client-side threading and parallelism options
• enable SDK logging

Implement data access by using the Azure Cosmos DB SQL language
• implement queries that use arrays, nested objects, aggregation, and ordering
• implement a correlated subquery
• implement queries that use array and type-checking functions
• implement queries that use mathematical, string, and date functions
• implement queries based on variable data

Implement data access by using SQL API SDKs
• choose when to use a point operation versus a query operation
• implement a point operation that creates, updates, and deletes documents
• implement an update by using a patch operation
• manage multi-document transactions using SDK Transactional Batch
• perform a multi-document load using SDK Bulk
• implement optimistic concurrency control using ETags
• implement session consistency by using session tokens
• implement a query operation that includes pagination
• implement a query operation by using a continuation token
• handle transient errors and 429s
• specify TTL for a document
• retrieve and use query metrics

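The optimistic-concurrency bullet above rests on ETags: a replace carries the ETag obtained at read time, and the service rejects it with HTTP 412 (Precondition Failed) if the document changed in between, so the caller re-reads and retries. The in-memory container below is a stand-in written only to illustrate the pattern, not the real SDK:

```python
import uuid

class PreconditionFailed(Exception):  # models HTTP 412
    pass

class FakeContainer:
    """In-memory stand-in for a container: each write assigns a new ETag."""
    def __init__(self):
        self._items = {}   # id -> (etag, document)

    def upsert(self, doc):
        etag = str(uuid.uuid4())
        self._items[doc["id"]] = (etag, dict(doc))
        return etag

    def read(self, doc_id):
        etag, doc = self._items[doc_id]
        return etag, dict(doc)

    def replace(self, doc, if_match):
        current_etag, _ = self._items[doc["id"]]
        if if_match != current_etag:   # document changed since our read
            raise PreconditionFailed(doc["id"])
        return self.upsert(doc)

def increment_with_retry(container, doc_id, max_attempts=5):
    """Read-modify-write loop that retries on ETag mismatch."""
    for _ in range(max_attempts):
        etag, doc = container.read(doc_id)
        doc["count"] += 1
        try:
            container.replace(doc, if_match=etag)
            return doc["count"]
        except PreconditionFailed:
            continue   # another writer won the race; re-read and try again
    raise RuntimeError("gave up after repeated conflicts")
```

The same retry shape applies to the "handle transient errors and 429s" bullet, except that rate-limit retries should also honor the server-suggested back-off delay.
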
Implement server-side programming in Azure Cosmos DB Core API by using JavaScript

• write, deploy, and call a stored procedure
• design stored procedures to work with multiple items transactionally
• implement triggers
• implement a user-defined function

Design and Implement Data Distribution (5–10%)
Design and implement a replication strategy for Azure Cosmos DB
• choose when to distribute data
• define automatic failover policies for regional failure for Azure Cosmos DB Core API
• perform manual failovers to move single master write regions
• choose a consistency model
• identify use cases for different consistency models
• evaluate the impact of consistency model choices on availability and associated RU cost
• evaluate the impact of consistency model choices on performance and latency
• specify application connections to replicated data

Design and implement multi-region write
• choose when to use multi-region write
• implement multi-region write
• implement a custom conflict resolution policy for Azure Cosmos DB Core API

Integrate an Azure Cosmos DB Solution (5–10%)
Enable Azure Cosmos DB analytical workloads

• enable Azure Synapse Link
• choose between Azure Synapse Link and Spark Connector
• enable the analytical store on a container
• enable a connection to an analytical store and query from Azure Synapse Spark or Azure Synapse SQL
• perform a query against the transactional store from Spark
• write data back to the transactional store from Spark

Implement solutions across services

• integrate events with other applications by using Azure Functions and Azure Event Hubs
• denormalize data by using Change Feed and Azure Functions
• enforce referential integrity by using Change Feed and Azure Functions
• aggregate data by using Change Feed and Azure Functions, including reporting
• archive data by using Change Feed and Azure Functions
• implement Azure Cognitive Search for an Azure Cosmos DB solution

Optimize an Azure Cosmos DB Solution (15–20%)
Optimize query performance in Azure Cosmos DB Core API

• adjust indexes on the database
• calculate the cost of the query
• retrieve request unit cost of a point operation or query
• implement Azure Cosmos DB integrated cache

Design and implement change feeds for an Azure Cosmos DB Core API

• develop an Azure Functions trigger to process a change feed
• consume a change feed from within an application by using the SDK
• manage the number of change feed instances by using the change feed estimator
• implement denormalization by using a change feed
• implement referential enforcement by using a change feed
• implement aggregation persistence by using a change feed
• implement data archiving by using a change feed

Define and implement an indexing strategy for an Azure Cosmos DB Core API
• choose when to use a read-heavy versus write-heavy index strategy
• choose an appropriate index type
• configure a custom indexing policy by using the Azure portal
• implement a composite index
• optimize index performance

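A custom indexing policy combining included/excluded paths with a composite index, as the objectives above describe, uses the documented JSON shape; the paths below are illustrative:

```json
{
  "indexingMode": "consistent",
  "includedPaths": [
    { "path": "/*" }
  ],
  "excludedPaths": [
    { "path": "/payload/*" }
  ],
  "compositeIndexes": [
    [
      { "path": "/category", "order": "ascending" },
      { "path": "/price", "order": "descending" }
    ]
  ]
}
```

Excluding large, never-queried paths such as the hypothetical `/payload/*` reduces RU cost on writes, while the composite index serves `ORDER BY c.category ASC, c.price DESC` without a per-item sort.
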
Maintain an Azure Cosmos DB Solution (25–30%)
Monitor and troubleshoot an Azure Cosmos DB solution
• evaluate response status code and failure metrics
• monitor metrics for normalized throughput usage by using Azure Monitor
• monitor server-side latency metrics by using Azure Monitor
• monitor data replication in relation to latency and availability
• configure Azure Monitor alerts for Azure Cosmos DB
• implement and query Azure Cosmos DB logs
• monitor throughput across partitions
• monitor distribution of data across partitions
• monitor security by using logging and auditing

Implement backup and restore for an Azure Cosmos DB solution

• choose between periodic and continuous backup
• configure periodic backup
• configure continuous backup and recovery
• locate a recovery point for a point-in-time recovery
• recover a database or container from a recovery point

Implement security for an Azure Cosmos DB solution
• choose between service-managed and customer-managed encryption keys
• configure network-level access control for Azure Cosmos DB
• configure data encryption for Azure Cosmos DB
• manage control plane access to Azure Cosmos DB by using Azure role-based access control (RBAC)
• manage data plane access to Azure Cosmos DB by using keys
• manage data plane access to Azure Cosmos DB by using Azure Active Directory
• configure Cross-Origin Resource Sharing (CORS) settings
• manage account keys by using Azure Key Vault
• implement customer-managed keys for encryption
• implement Always Encrypted

Implement data movement for an Azure Cosmos DB solution

• choose a data movement strategy
• move data by using client SDK bulk operations
• move data by using Azure Data Factory and Azure Synapse pipelines
• move data by using a Kafka connector
• move data by using Azure Stream Analytics
• move data by using the Azure Cosmos DB Spark Connector

Implement a DevOps process for an Azure Cosmos DB solution
• choose when to use declarative versus imperative operations
• provision and manage Azure Cosmos DB resources by using Azure Resource Manager templates (ARM templates)
• migrate between standard and autoscale throughput by using PowerShell or Azure CLI
• initiate a regional failover by using PowerShell or Azure CLI
• maintain index policies in production by using ARM templates

QUESTION 1
You are troubleshooting the current issues caused by the application updates.
Which action can address the application updates issue without affecting the functionality of the application?

A. Enable time to live for the con-product container.
B. Set the default consistency level of account1 to strong.
C. Set the default consistency level of account1 to bounded staleness.
D. Add a custom indexing policy to the con-product container.

Answer: C

QUESTION 2
You need to select the partition key for con-iot1. The solution must meet the IoT telemetry requirements.
What should you select?

A. the timestamp
B. the humidity
C. the temperature
D. the device ID

Answer: D

QUESTION 3
You have an application named App1 that reads the data in an Azure Cosmos DB Core (SQL) API account.
App1 runs the same read queries every minute. The default consistency level for the account is set to eventual.
You discover that every query consumes request units (RUs) instead of using the cache.
You verify the IntegratedCacheItemHitRate metric and the IntegratedCacheQueryHitRate metric. Both metrics have values of 0.
You verify that the dedicated gateway cluster is provisioned and used in the connection string.
You need to ensure that App1 uses the Azure Cosmos DB integrated cache.
What should you configure?

A. the indexing policy of the Azure Cosmos DB container
B. the consistency level of the requests from App1
C. the connectivity mode of the App1 CosmosClient
D. the default consistency level of the Azure Cosmos DB account

Answer: C

Google Certified Professional Cloud Architect (GCP) Exam

Length: 2 hours
Registration fee: $200 (plus tax where applicable)
Languages: English, Japanese
Exam format: Multiple choice and multiple select, taken remotely or in person at a test center. Locate a test center near you.

Professional Cloud Architects enable organizations to leverage Google Cloud technologies. With a thorough understanding of cloud architecture and Google Cloud, they design, develop, and manage robust, secure, scalable, highly available, and dynamic solutions to drive business objectives.

The Professional Cloud Architect certification exam assesses your ability to:
Design and plan a cloud solution architecture
Manage and provision the cloud solution infrastructure
Design for security and compliance
Analyze and optimize technical and business processes
Manage implementations of cloud architecture
Ensure solution and operations reliability

Exam delivery method:
a. Take the online-proctored exam from a remote location
b. Take the onsite-proctored exam at a testing center
Prerequisites: None

Recommended experience: 3+ years of industry experience including 1+ years designing and managing solutions using Google Cloud

Certification Renewal / Recertification: Candidates must recertify in order to maintain their certification status. Unless explicitly stated in the detailed exam descriptions, all Google Cloud certifications are valid for two years from the date of certification. Recertification is accomplished by retaking the exam during the recertification eligibility time period and achieving a passing score. You may attempt recertification starting 60 days prior to your certification expiration date.
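The renewal window described above is simple date arithmetic: two years of validity from the certification date, with recertification opening 60 days before expiration. A minimal sketch (the certification date below is hypothetical):

```python
# Sketch of the recertification window: certifications are valid for
# two years, and recertification opens 60 days before expiration.
from datetime import date, timedelta

certified_on = date(2023, 3, 1)   # hypothetical certification date
# replace() is safe here; a Feb 29 certification date would need care.
expires_on = certified_on.replace(year=certified_on.year + 2)
recert_opens = expires_on - timedelta(days=60)

assert expires_on == date(2025, 3, 1)
assert recert_opens == date(2024, 12, 31)
```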

The exam guide contains a complete list of topics that may be included on the exam, helping you determine if your skills align with the exam.

Professional Cloud Architect

Certification exam guide
A Google Cloud Certified Professional Cloud Architect enables organizations to leverage Google Cloud technologies. Through an understanding of cloud architecture and Google technology, this individual designs, develops, and manages robust, secure, scalable, highly available, and dynamic solutions to drive business objectives. The Cloud Architect should be proficient in all aspects of enterprise cloud strategy, solution design, and architectural best practices. The Cloud Architect should also be experienced in software development methodologies and approaches including multi-tiered distributed applications which span multicloud or hybrid environments.

Case studies
During the exam for the Cloud Architect Certification, some of the questions may refer you to a case study that describes a fictitious business and solution concept. These case studies are intended to provide additional context to help you choose your answer(s). Review the case studies that may be used in the exam.

EHR Healthcare
Helicopter Racing League
Mountkirk Games
TerramEarth

Section 1. Designing and planning a cloud solution architecture

1.1 Designing a solution infrastructure that meets business requirements. Considerations include:
Business use cases and product strategy
Cost optimization
Supporting the application design
Integration with external systems
Movement of data
Design decision trade-offs
Build, buy, modify, or deprecate
Success measurements (e.g., key performance indicators [KPI], return on investment [ROI], metrics)
Compliance and observability

1.2 Designing a solution infrastructure that meets technical requirements. Considerations include:
High availability and failover design
Elasticity of cloud resources with respect to quotas and limits
Scalability to meet growth requirements
Performance and latency

1.3 Designing network, storage, and compute resources. Considerations include:
Integration with on-premises/multicloud environments
Cloud-native networking (VPC, peering, firewalls, container networking)
Choosing data processing technologies
Choosing appropriate storage types (e.g., object, file, databases)
Choosing compute resources (e.g., preemptible, custom machine type, specialized workload)
Mapping compute needs to platform products

1.4 Creating a migration plan (i.e., documents and architectural diagrams). Considerations include:
Integrating solutions with existing systems
Migrating systems and data to support the solution
Software license mapping
Network planning
Testing and proofs of concept
Dependency management planning

1.5 Envisioning future solution improvements. Considerations include:
Cloud and technology improvements
Evolution of business needs
Evangelism and advocacy

Section 2. Managing and provisioning a solution infrastructure

2.1 Configuring network topologies. Considerations include:
Extending to on-premises environments (hybrid networking)
Extending to a multicloud environment that may include Google Cloud to Google Cloud communication
Security protection (e.g. intrusion protection, access control, firewalls)

2.2 Configuring individual storage systems. Considerations include:
Data storage allocation
Data processing/compute provisioning
Security and access management
Network configuration for data transfer and latency
Data retention and data life cycle management
Data growth planning

2.3 Configuring compute systems. Considerations include:
Compute resource provisioning
Compute volatility configuration (preemptible vs. standard)
Network configuration for compute resources (Google Compute Engine, Google Kubernetes Engine, serverless networking)
Infrastructure orchestration, resource configuration, and patch management
Container orchestration

Section 3. Designing for security and compliance

3.1 Designing for security. Considerations include:
Identity and access management (IAM)
Resource hierarchy (organizations, folders, projects)
Data security (key management, encryption, secret management)
Separation of duties (SoD)
Security controls (e.g., auditing, VPC Service Controls, context aware access, organization policy)
Managing customer-managed encryption keys with Cloud Key Management Service
Remote access

3.2 Designing for compliance. Considerations include:
Legislation (e.g., health record privacy, children’s privacy, data privacy, and ownership)
Commercial (e.g., sensitive data such as credit card information handling, personally identifiable information [PII])
Industry certifications (e.g., SOC 2)
Audits (including logs)

Section 4. Analyzing and optimizing technical and business processes

4.1 Analyzing and defining technical processes. Considerations include:
Software development life cycle (SDLC)
Continuous integration / continuous deployment
Troubleshooting / root cause analysis best practices
Testing and validation of software and infrastructure
Service catalog and provisioning
Business continuity and disaster recovery

4.2 Analyzing and defining business processes. Considerations include:
Stakeholder management (e.g. influencing and facilitation)
Change management
Team assessment / skills readiness
Decision-making processes
Customer success management
Cost optimization / resource optimization (capex / opex)

4.3 Developing procedures to ensure reliability of solutions in production (e.g., chaos engineering, penetration testing)

Section 5. Managing implementation

5.1 Advising development/operation team(s) to ensure successful deployment of the solution. Considerations include:
Application development
API best practices
Testing frameworks (load/unit/integration)
Data and system migration and management tooling

5.2 Interacting with Google Cloud programmatically. Considerations include:
Google Cloud Shell
Google Cloud SDK (gcloud, gsutil and bq)
Cloud Emulators (e.g. Cloud Bigtable, Datastore, Spanner, Pub/Sub, Firestore)

Section 6. Ensuring solution and operations reliability

6.1 Monitoring/logging/profiling/alerting solution

6.2 Deployment and release management

6.3 Assisting with the support of deployed solutions

6.4 Evaluating quality control measures



QUESTION 1
The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least
privilege model with separation of duties for administration between production and development resources.
What Google domain and project structure should you recommend?

A. Create two G Suite accounts to manage users: one for development/test/staging and one for production.
Each account should contain one project for every application
B. Create two G Suite accounts to manage users: one with a single project for all development applications
and one with a single project for all production applications
C. Create a single G Suite account to manage users with each stage of each application in its own project
D. Create a single G Suite account to manage users with one project for the development/test/staging
environment and one project for the production environment

Answer: D

QUESTION 2
A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down
the old server, the new database server stops responding to SSH connections. It is still serving database
requests to the application servers correctly.
What three steps should you take to diagnose the problem? (Choose three.)

A. Delete the virtual machine (VM) and disks and create a new one
B. Delete the instance, attach the disk to a new VM, and investigate
C. Take a snapshot of the disk and connect to a new machine to investigate
D. Check inbound firewall rules for the network the machine is connected to
E. Connect the machine to another network with very simple firewall rules and investigate
F. Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate

Answer: C,D,F

QUESTION 3
JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers
to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data.
What service account key-management strategy should you recommend?

A. Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs)
B. Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs
C. Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs
D. Deploy a custom authentication service on GCE/Google Kubernetes Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs

Answer: C

C_TS4FI_1909 SAP Certified Application Associate – SAP S/4HANA for Financial Accounting Associates (SAP S/4HANA 1909) Exam

Delivery Methods: SAP Certification
Level: Associate
Exam: 80 questions
Cut Score: 57%
Duration: 180 mins
Languages: German, English, Spanish, French, Japanese, Korean, Russian, Chinese

Description
The “SAP Certified Application Associate – SAP S/4HANA for Financial Accounting Associates (SAP S/4HANA 1909)” certification exam verifies that the candidate possesses fundamental knowledge and proven skills in the area of SAP S/4HANA Financial Accounting. It tests that the candidate has a good overall understanding within this consultant profile and can implement this knowledge practically in projects under the guidance of an experienced consultant. It is recommended as an entry-level qualification to allow consultants to get acquainted with Financial Accounting projects. This certificate is the ideal starting point for a professional career as a Financial Accounting consultant on SAP S/4HANA. If experience in SAP implementation projects of Financial Accounting is added over the years, a professional career can be validated by taking a second exam: “SAP Certified Application Professional – Financials in SAP S/4HANA for SAP ERP Financials experts”.

Notes
To ensure success, SAP recommends combining education courses and hands-on experience to prepare for your certification exam as questions will test your ability to apply the knowledge you have gained in training.
You are not allowed to use any reference materials during the certification test (no access to online documentation or to any SAP system).

Topic Areas
Please see below the list of topics that may be covered within this certification and the courses that cover them. Its accuracy does not constitute a legitimate claim; SAP reserves the right to update the exam content (topics, items, weighting) at any time.

Financial Closing > 12%
Perform month and year-end closing in Financial Accounting (exchange rate valuation, post provisions etc.), create balance sheet, create profit and loss statements, monitor closing operations using the Financial Closing Cockpit, manage accruals, and manage posting periods.

TS4F02 (SAP S/4HANA 1909)

—– OR —–

S4F15 (SAP S/4HANA 1909)
S4F17 (SAP S/4HANA 1909)

General Ledger Accounting > 12%

Create and maintain general ledger accounts, exchange rates, bank master data and define house banks. Create and reverse general ledger transfer postings, post cross-company code transactions, create profit centers and segments. Clear an account and define and use a chart of accounts. Maintain tolerances, tax codes, and post documents with document splitting.

TS4F01 (SAP S/4HANA 1909)

—– OR —–

S4F12 (SAP S/4HANA 1909)
S4F13 (SAP S/4HANA 1909)

Accounts Payable & Accounts Receivable > 12%

Create and maintain business partners, post invoices and payments and use special g/l transactions, reverse invoices and payments, block open invoices for payment, configure the payment program, and manage partial payments. Define the customizing settings for the Payment Medium Workbench, use the debit balance check for handling payments, define terms of payment and payment types, explain the connection of customers to vendors, describe integration with procurement and sales.

TS4F01 (SAP S/4HANA 1909)

—– OR —–

S4F12 (SAP S/4HANA 1909)
S4F13 (SAP S/4HANA 1909)

Asset Accounting > 12%
Create and maintain charts of depreciation and the depreciation areas, asset classes, asset master data, and configure and perform FI-AA business processes in the SAP system. Set up valuation and depreciation, perform periodic and year-end closing processes, and explain and configure parallel accounting.

TS4F02 (SAP S/4HANA 1909)

—– OR —–

S4F15 (SAP S/4HANA 1909)
S4F17 (SAP S/4HANA 1909)

Organizational Assignments and Process Integration > 12%
Manage Organizational Units, currencies, configure Validations and Document Types, utilize Reporting Tools, configure Substitutions, and manage Number ranges.

TS4F01 (SAP S/4HANA 1909)

—– OR —–

S4F12 (SAP S/4HANA 1909)
S4F13 (SAP S/4HANA 1909)

Overview and Deployment of SAP S/4HANA < 8%
Explain the SAP HANA Architecture and describe the SAP S/4HANA scope and deployment options.

TS4F01 (SAP S/4HANA 1909)

—– OR —–

S4F12 (SAP S/4HANA 1909)
S4F13 (SAP S/4HANA 1909)

General Information

Exam Preparation
All SAP consultant certifications are available as Cloud Certifications in the Certification Hub and can be booked with product code CER006. With CER006 – SAP Certification in the Cloud, you can take up to six exam attempts of your choice in one year – from wherever and whenever it suits you! Test dates can be chosen and booked individually.

Each specific certification comes with its own set of preparation tactics. We define them as “Topic Areas” and they can be found on each exam description. You can find the number of questions, the duration of the exam, what areas you will be tested on, and recommended course work and content you can reference.

Certification exams might contain unscored items that are being tested for upcoming releases of the exam. These unscored items are randomly distributed across the certification topics and are not counted towards the final score. The total number of items of an examination as advertised in the Training Shop is never exceeded when unscored items are used.

Please be aware that the professional-level certification also requires several years of practical on-the-job experience and addresses real-life scenarios.

For more information refer to our FAQs.
SAP Global Certification FAQ – Overview
SAP Global Certification FAQ – Exam Process
SAP Global Certification FAQ – Post-Exam Process

Safeguarding the Value of Certification
SAP Education has worked hard together with the Certification & Enablement Influence Council to enhance the value of certification and improve the exams. An increasing number of customers and partners are now looking towards certification as a reliable benchmark to safeguard their investments. Unfortunately, the increased demand for certification has brought with it a growing number of people who try to attain SAP certification through unfair means. This ongoing issue has prompted SAP Education to place a new focus on test security. Please take a look at our post to understand what you can do to help to protect the credibility of your certification status.

Our Certification Test Security Guidelines will help you as test taker to understand the testing experience.


QUESTION 1
You are asked to explain how assets under construction work in SAP S/4HANA.
What should you highlight? (Choose two.)

A. It is impossible to use assets under construction with Investment Management
B. It is possible to calculate and post depreciation in the balance sheet depreciation area for assets under construction
C. It is possible to post credit memos, even after assets under construction are fully capitalized
D. It is possible to post special tax depreciation and investment support for assets under construction

Answer: C,D
Section: Asset Accounting
Explanation
Explanation/Reference:
Reference: https://www.sapdocs.info/2246/asset-under-construction-auc-in-sap/


QUESTION 2
Which date is used to determine the depreciation start date?

A. Value date
B. System date
C. Document date
D. Posting date

Answer: A
Section: Asset Accounting
Explanation
Explanation/Reference:
Reference: https://wiki.scn.sap.com/wiki/display/ERPFI/Asset+Value+Date+and+Ordinary+Depreciation+Start+Date
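The answer above can be made concrete with a simplified straight-line depreciation sketch in which the asset value date (not the posting or document date) drives the depreciation start. SAP's actual behavior is governed by period control keys and is considerably richer; the function name and the start-of-month convention below are assumptions made for illustration only.

```python
# Simplified straight-line depreciation keyed off the asset value date.
from datetime import date

def annual_depreciation(cost: float, useful_life_years: int,
                        value_date: date, fiscal_year: int) -> float:
    """Depreciation for one fiscal year, pro-rated from the value date."""
    if fiscal_year < value_date.year:
        return 0.0                      # depreciation has not started yet
    full_year = cost / useful_life_years
    if fiscal_year > value_date.year:
        return full_year
    # Start-of-month convention assumed for the first year.
    months_remaining = 12 - value_date.month + 1
    return full_year * months_remaining / 12

# Asset acquired with value date 1 July: half a year of depreciation
# in the first fiscal year, a full year thereafter.
assert annual_depreciation(12000, 5, date(2023, 7, 1), 2023) == 1200.0
assert annual_depreciation(12000, 5, date(2023, 7, 1), 2024) == 2400.0
```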


QUESTION 3
You are configuring asset-related postings of deprecation areas in Asset Accounting.
Which of the following settings is NOT permitted?

A. Area posts depreciation only
B. Area posts revaluation only
C. Area posts in real time
D. Area does not post

Answer: B

3V0-32.21 Advanced Design VMware Cloud Management and Automation Exam

EXAM NUMBER : 3V0-32.21
PRODUCT : VMware vRealize Cloud Universal
EXAM LANGUAGE : English
Associate Certifications : VCAP-CMA Design 2021
Duration : 145 minutes
Number of Questions : 60
Passing Score : 300 (scaled) Learn more
Format : Single and Multiple Choice, Proctored

EXAM OVERVIEW
The Advanced Design VMware Cloud Management and Automation exam tests a candidate’s ability to design a VMware Cloud Management solution by validating product and technical knowledge as well as the ability to analyze, evaluate, and create solutions.

Exam Details (Last Updated: 02/27/2021)
The Advanced Design VMware Cloud Management and Automation (3V0-32.21) exam, which leads to the VMware Certified Advanced Professional – Cloud Management and Automation Design 2021 certification, is a 60-item exam with a scaled passing score of 300. Candidates are given 145 minutes, which includes adequate time for non-native English speakers to complete the exam.

Exam Delivery

This is a proctored exam delivered through Pearson VUE. For more information, visit the Pearson VUE website.

Certification Information

For details and a complete list of requirements and recommendations for attainment, please reference the VMware Education Services – Certification website.

Minimally Qualified Candidate
The Minimally Qualified Candidate (MQC) is a conceptualization of the certification candidate who possesses the minimum knowledge, skills, experience, and competence to meet our expectations of a credentialed individual. The MQC achieving the VMware Certified Advanced Professional – Cloud Management and Automation Design 2021 (VCAP-CMA Design) certification is capable of developing a conceptual design given a set of customer requirements, determining the functional requirements needed to create a logical design, and architecting a physical design using these elements.

MQCs are typically designers or architects capable of translating business requirements into a cloud management solution using vRealize Suite components. The successful candidate can design a solution that integrates vRealize Suite Lifecycle Manager, vRealize Automation, vRealize Orchestrator, vRealize Operations, and vRealize Log Insight, and possesses an understanding of public and private cloud design concepts, including multi-tenancy, governance, and compliance.

The successful candidate usually has five or more years of general IT experience and at least two years’ experience designing vRealize cloud management solutions in accordance with VMware recommended practices, and will likely hold one or more industry-recognized general IT certifications. The candidate holds a VMware Certified Professional – Cloud Management and Automation certification and demonstrates the knowledge contained in the VCAP-CMA Design exam blueprint.

Exam Sections
VMware exam blueprint sections are now standardized to the seven sections below, some of which may NOT be included in the final exam blueprint depending on the exam objectives.
Section 1 – Architecture and Technologies
Section 2 – Products and Solutions
Section 3 – Planning and Designing
Section 4 – Installing, Configuring, and Setup
Section 5 – Performance-tuning, Optimization, and Upgrades
Section 6 – Troubleshooting and Repairing
Section 7 – Administrative and Operational Tasks


If a section does not have testable objectives in this version of the exam, it will be noted below, accordingly. The objective numbering may be referenced in your score report at the end of your testing event for further preparation should a retake of the exam be necessary.

Sections Included in this Exam
Section 1 – Architecture and Technologies
Objective 1.1 – Differentiate between architecting a solution on SaaS vs on-prem (i.e. cloud proxy)
Objective 1.2 – Differentiate between business and technical requirements
Objective 1.3 – Differentiate conceptual, logical and detailed design
Objective 1.4 – Differentiate between Availability, Manageability, Performance, Recoverability and Security (AMPRS)

Section 2 – Products and Solutions
Objective 2.1 – Understand the design principles of VVD
Objective 2.2 – Understand the design of vRealize Cloud Management solutions
Objective 2.3 – Understand the design of VMware Cloud Foundation

Section 3 – Planning and Designing

Objective 3.1 – Gather/Analyze requirements- Business/Technical
Objective 3.2 – Determine risks, constraints, and assumptions for a design
Objective 3.3 – Create a Conceptual Design
Objective 3.4 – Create a vRealize Suite Lifecycle Manager Logical Design
Objective 3.5 – Create a VMware Identity Manager Logical Design
Objective 3.6 – Create a vRealize Automation Logical Design
Objective 3.7 – Create a vRealize Operations Logical Design

3.7.1 Determine the architecture of a vROps environment based on requirements
Objective 3.8 – Create vRealize Log Insight Logical design
Objective 3.9 – Create a vRealize Cloud Automation Extensibility Design (Orchestrator/ABX)
Objective 3.10 – Create a vRealize Suite Lifecycle Manager Physical Design
Objective 3.11 – Create a VMware Identity Manager Physical Design
Objective 3.12 – Create a vRealize Automation Physical Design

3.12.1 Determine the appropriate tenancy model based on requirements
3.12.2 Determine the appropriate RBAC design based on requirements


Objective 3.13 – Create a vRealize Operations Physical Design
3.13.1 Determine the appropriate RBAC design based on requirements
3.13.2 Determine the appropriate Operational & Business Intent configuration based on requirements
3.13.3 Determine the size of a vROps Cluster based on requirements
3.13.4 Determine the architecture of a vROps environment based on requirements
3.13.5 Best Practices for vROps

Objective 3.14 – Create vRealize Log Insight Physical Design
3.14.1 Determine the appropriate RBAC design based on requirements
3.14.2 Determine the size of a vRLI Cluster based on requirements
3.14.3 Determine the architecture of a vRLI environment based on requirements

Objective 3.15 – Create a vRealize Automation Tenant Design (Cloud Assembly)
3.15.1 Determine the appropriate Cloud Account configuration based on requirements
3.15.2 Determine the appropriate Integration configuration based on requirements
3.15.3 Determine the appropriate Project design/configuration based on requirements
3.15.4 Determine the appropriate Image Mapping design based on requirements
3.15.5 Determine the appropriate Flavor Mapping design based on requirements
3.15.6 Determine the appropriate Storage Profile design based on requirements
3.15.7 Determine the appropriate Network Profile design based on requirements

Objective 3.16 – Create a vRealize Automation Service Catalog Design (Service Broker)

3.16.1 Determine the appropriate Content based on requirements

Objective 3.17 – Determine the vRealize Suite Integrations based on requirements
3.17.1 vROps – vRA integration (Workload Placement, Health and Pricing)
3.17.2 vROps – vRLI Integrations (Alerts)

Section 4 – There are no testable objectives for this section.
Section 5 – There are no testable objectives for this section.
Section 6 – There are no testable objectives for this section.

QUESTION 1
A cloud architect is writing an implementation plan for deployment of a clustered vRealize Automation
deployment that will use a third-party load balancer.
Which two steps should the architect include within the implementation plan to ensure a successful
deployment of vRealize Automation? (Choose two.)

A. Disable all secondary nodes from the load balancer pools
B. Create and configure the monitoring of vRealize Automation and vRealize Orchestrator
C. Ensure all the SAN certificates for vRealize Suite are available
D. Enable all non-primary nodes on the load balancer
E. Turn off the health monitors or change them temporarily to default to ICMP and ensure traffic is still forwarding to the primary node

Answer: B,C

QUESTION 2
Which two statements are correct for VMware Cloud Services cloud proxy? (Choose two.)

A. It helps connect to public cloud entities.
B. It connects cloud services to on-premises networks.
C. An OVA deployment to an ESXi server is supported.
D. It requires a load balancer.
E. An OVA must be deployed on a vCenter Server.

Answer: B,E

QUESTION 3
A customer requests a conceptual design for the Cloud Management solution.
What three selections represent a conceptual design? (Choose three.)

A. Cloud Automation
B. Identity Management
C. vRealize Automation
D. Workspace One Access
E. Monitoring and Logging
F. vRealize Log Insight

Answer: A,B,E
