Answers & resources included.
- 1 official-style exam: 60 questions – 120 mins
- 1 practice test: 60 questions – 120 mins
- 1 practice test: 60 questions – 120 mins
- 1 practice test: 41 questions – 120 mins
- 1 practice test: 188 questions – 120 mins
Total number of questions = 308 questions
If you can solve these tests, you will be able to pass the official exam easily, because they have the same difficulty as the official exam or are slightly harder, to strengthen you for the real thing.
Points covered in the exams (the same points as the official exam):
● Create a technical design (10-15%)
● Configure Microsoft Dataverse (15-20%)
● Create and configure Power Apps (15-20%)
● Configure business process automation (5-10%)
● Extend the user experience (10-15%)
● Extend the platform (15-20%)
● Develop integrations (5-10%)
The objectives are distributed among the practice tests, with answers, detailed explanations, and resource links (where available) for every question. These practice tests allow you to make four attempts, each within the same duration and covering the same topics as the official exam, so you can adjust your focus areas, cut down on study time, and prove your readiness for the official exam. You can use all four exams in whatever way suits your preparation workflow.
These exams are time-limited exactly like the official exam, to build your speed at solving questions under exam pressure.
Different types of questions including scenario-based questions and normal multiple-choice questions similar to the questions you find in the official exam.
Hotspot questions and the most common question types that are most likely to appear on the official exam.
Join students who found these practice tests useful and sent me a thank you message after passing the official exam.
What you’ll learn
- Answers to 308 questions distributed across 4 official-like practice exams
- How to approach hotspot questions and the most common question types you are likely to find on the official exam
- How to solve all exam questions before time runs out
- How to take and pass Exam PL-400: Microsoft Power Platform Developer

Are there any course requirements or prerequisites?
- Basic knowledge of the Microsoft Power Platform
- Every question is answered with a detailed explanation and resources
Who this course is for: Students who want to pass Exam PL-400: Microsoft Power Platform Developer, and students who want to train on official-like practice certification exams.
Question 1: You are building a custom application in Azure to process resumes for the HR department. The app must monitor submissions of resumes. You need to parse the resumes and save contact and skills information into the Common Data Service. Which mechanism should you use?
A. Power Automate B. Common Data Service plug-in C. Web API D. Custom workflow activity
Correct Answer: A
Explanation Improve operational efficiency with a unified view of business data by creating flows that use Dataverse (Common Data Service has been renamed to Microsoft Dataverse as of November 2020). For example, you can use Dataverse within Power Automate in these key ways: Create a flow to import data, export data, or take action (such as sending a notification) when data changes. Instead of creating an approval loop through email, create a flow that stores approval state in an entity, and then build a custom app in which users can approve or reject items. Reference: https://docs.microsoft.com/en-us/power-automate/common-data-model-intro
Question 2: A company manages capital equipment for an electric utility company. The company has a SQL Server database that contains maintenance records for the equipment. Technicians who service the equipment use the Dynamics 365 Field Service mobile app on tablet devices to view scheduled assignments. Technicians use a canvas app to display the maintenance history for each piece of equipment and update the history. Managers use a Power BI dashboard that displays Dynamics 365 Field Service and real-time maintenance data. Due to increasing demand, managers must be able to work in the field as technicians. You need to design a solution that allows the managers to work from one single screen. What should you do?
A. Add the maintenance history app to the Field Service Mobile app. B. Add the manager Power BI dashboard to the Field Service mobile app. C. Create a new maintenance canvas app from within the Power BI management dashboard. D. Add the maintenance history app to the Power BI dashboard.
Correct Answer: D
Explanation Power BI enables data insights and better decision-making, while Power Apps enables everyone to build and use apps that connect to business data. Using the Power Apps visual, you can pass context-aware data to a canvas app, which updates in real time as you make changes to your report. Now, your app users can derive business insights and take actions from right within their Power BI reports and dashboards. Reference: https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/powerapps-custom-visual
Question 3: A company has an application that provides API access. You plan to connect to the API from a canvas app by using a custom connector. You need to request information from the API developers so that you can create the custom connector. Which two types of files can you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
A. YAML B. WSDL C. OpenAPI definition D. Postman collection
Correct Answer: C,D
Explanation OpenAPI definitions or Postman collections can be used to describe a custom connector. Reference: https://docs.microsoft.com/en-us/connectors/custom-connectors/faq
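To make this concrete, here is a minimal, hypothetical OpenAPI 2.0 (Swagger) definition of the kind the API developers could hand over for building a custom connector. The title, host, and path are invented placeholders, not a real service; a Postman collection export would carry equivalent information in its own JSON format.

```python
import json

# Hypothetical minimal OpenAPI 2.0 definition for a custom connector.
# Host, basePath, and the /resumes operation are illustrative only.
openapi_definition = {
    "swagger": "2.0",
    "info": {"title": "Contoso HR API", "version": "1.0"},
    "host": "api.contoso.example",
    "basePath": "/v1",
    "paths": {
        "/resumes": {
            "get": {
                "summary": "List submitted resumes",
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

# The custom connector wizard imports this as a .json file.
print(json.dumps(openapi_definition, indent=2))
```

Either file format describes the same thing: the endpoints, operations, and response shapes the connector will expose to the canvas app.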
Question 4: You plan to create a canvas app to manage large sets of records. Users will filter and sort the data. You must implement delegation in the canvas app to mitigate potential performance issues. You need to recommend data sources for the app. Which two data sources should you recommend? Each correct answer presents a complete solution.
A. SQL Server B. Common Data Service C. Azure Data Factory D. Azure Table Storage
Correct Answer: A, B
Explanation When you are creating reports from large data sources (perhaps millions of records), you want to minimize network traffic. Working with large data sets requires using data sources and formulas that can be delegated. It’s the only way to keep your app performing well and ensure users can access all the information they need. Delegation is supported for certain tabular data sources only. These tabular data sources are the most popular, and they support delegation:
- Common Data Service
- SharePoint
- SQL Server
Reference: https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/delegation-overview
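As a rough Power Fx sketch of the difference (table and column names here are invented for illustration): the first filter can be pushed down to SQL Server, while the second forces the app to filter locally because the function is not delegable.

```
// Delegable: the comparison is evaluated by SQL Server itself, so all
// matching records are returned no matter how large the table is.
Filter('[dbo].[Maintenance]', Status = "Open")

// Not delegable: Len() cannot be translated to the data source, so only
// the first batch of downloaded records (500-2,000) is filtered locally,
// and the app may silently miss matching rows beyond that limit.
Filter('[dbo].[Maintenance]', Len(Notes) > 100)
```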
Question 5: A company plans to create an order processing app. When orders are created, the app will perform complex business logic and integrate with several external systems. Orders that have a large number of line items may take up to six minutes to complete. Processing for each order must be completed in one operation to avoid leaving records in an incomplete state. You need to recommend a solution for the company. What should you recommend?
A. an asynchronous workflow that uses a custom workflow activity B. a real-time workflow that uses a custom action C. a webhook that connects to an Azure Function D. an asynchronous plug-in
Correct Answer: B
Explanation Real-time workflows roll back all changes if they fail: as the workflow works through the process, a failure rolls back all of the prior steps taken. Incorrect Answers: A: With background workflows, actions are not rolled back on failure. All changes made up to the point of failure remain, and the workflow simply stops there. Reference: https://ledgeviewpartners.com/blog/what-are-the-differences-between-real-time-and-background-workflows-in-microsoft-dynamics-365-crm/
Are you looking to prepare for the CompTIA Security+ (SY0-601) exam? Find out whether you’re ready by testing yourself with this course.
Each of the practice tests in this set provides an entire exam’s worth of questions, enabling you to confirm your mastery of the topics and giving you the confidence you’ll need to take your CompTIA Security+ exam. There are 80 questions in each practice test.
Not sure which practice test course to choose? Check out our features and benefits:
FEATURES & BENEFITS – More practice for studying
Each test has 80 questions, is 90 minutes long, has a passing grade of 80%, and weights domains like the exam – Practice like the real CompTIA Security+ exam
Wrong answers linked to the domain they came from – Targeted studying: don’t waste time studying topics you already know
Pause or stop the exam whenever you like – Practice on your own time, at your own pace
Detailed explanation of the answer – Better understanding of the content, also understand why the wrong answers are incorrect
Exam questions are written by a panel of experienced test writers* – Know you’re getting relevant, well-written exam questions that mimic the real exam
WHAT’S COVERED? Domain 1.0 – Attacks, Threats, and Vulnerabilities (24%) Domain 2.0 – Architecture and Design (21%) Domain 3.0 – Implementation (25%) Domain 4.0 – Operations and Incident Response (16%) Domain 5.0 – Governance, Risk, and Compliance (14%)
HOW DO I TAKE THE COMPTIA SECURITY+ EXAM? Once you’re ready to take the CompTIA Security+ exam, you must first purchase an exam voucher. (Pro tip: be sure to search for a discounted voucher!) You will then need to create or log in to your account at home.pearsonvue.com, select the CompTIA Security+ exam, and enter the unique ID from your exam voucher.
Next, you will go to a page where you can sign up to take the exam in-person at an authorized PearsonVue Testing Center in your area, or you can sign up for an at-home testing experience using OnVUE.
The last step is to take and pass the exam. Be sure to let me know when you pass; I love to hear about my students’ success!
*Practice test questions are drawn from the McGraw-Hill Mike Meyers’ CompTIA Security+ Certification Guide, Third Edition (Exam SY0-601)
What you’ll learn
- What types of questions you’ll see on the real exam
- Which exam domains you need to spend more time studying
- How you can most efficiently prepare for the exam
- When you’re ready for the exam, by consistently passing these practice tests with 90-95%
Are there any course requirements or prerequisites?
- This practice test course is designed for anyone who wants to make sure they are ready to pass the CompTIA Security+ SY0-601 exam
- CompTIA recommends having your CompTIA Network+ certification (or equivalent) and two years of experience in IT administration with a security focus before sitting for this exam
Who this course is for: This practice test course is for anyone who is preparing to take the CompTIA Security+ Certification (SY0-601) exam and wants to test their knowledge and make sure they are ready to pass the real CompTIA exam
QUESTION 1 Which of the following will MOST likely adversely impact the operations of unpatched traditional programmable-logic controllers, running a back-end LAMP server and OT systems with human-management interfaces that are accessible over the Internet via a web interface? (Choose two.)
A. Cross-site scripting B. Data exfiltration C. Poor system logging D. Weak encryption E. SQL injection F. Server-side request forgery
Answer: DF
QUESTION 2 A company recently transitioned to a strictly BYOD culture due to the cost of replacing lost or damaged corporate-owned mobile devices. Which of the following technologies would be BEST to balance the BYOD culture while also protecting the company’s data?
A. Containerization B. Geofencing C. Full-disk encryption D. Remote wipe
Answer: A
QUESTION 3 A Chief Security Officer’s (CSO’s) key priorities are to improve preparation, response, and recovery practices to minimize system downtime and enhance organizational resilience to ransomware attacks. Which of the following would BEST meet the CSO’s objectives?
A. Use email-filtering software and centralized account management, patch high-risk systems, and restrict administration privileges on fileshares. B. Purchase cyber insurance from a reputable provider to reduce expenses during an incident. C. Invest in end-user awareness training to change the long-term culture and behavior of staff and executives, reducing the organization’s susceptibility to phishing attacks. D. Implement application whitelisting and centralized event-log management, and perform regular testing and validation of full backups.
Answer: D
QUESTION 4 A network engineer has been asked to investigate why several wireless barcode scanners and wireless computers in a warehouse have intermittent connectivity to the shipping server. The barcode scanners and computers are all on forklift trucks and move around the warehouse during their regular use. Which of the following should the engineer do to determine the issue? (Choose two.)
A. Perform a site survey B. Deploy an FTK Imager C. Create a heat map D. Scan for rogue access points E. Upgrade the security protocols F. Install a captive portal
Answer: A,C
QUESTION 5 A security administrator suspects an employee has been emailing proprietary information to a competitor. Company policy requires the administrator to capture an exact copy of the employee’s hard disk. Which of the following should the administrator use?
A. dd B. chmod C. dnsenum D. logger
Answer: A
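For context, `dd` performs a raw, bit-for-bit copy, which is why it is the forensic choice here. A minimal sketch, using an ordinary file in place of a raw device path such as /dev/sda (which a live acquisition would read instead):

```shell
# Create a small sample "disk" to stand in for the suspect drive.
printf 'proprietary data' > evidence_source.bin

# Bit-for-bit copy; on a real system, if= would name the raw device
# and the output would go to external evidence storage.
dd if=evidence_source.bin of=evidence_copy.dd bs=512 status=none

# Verify the image is an exact copy before any analysis begins.
cmp -s evidence_source.bin evidence_copy.dd && echo "exact copy verified"
```

In practice an examiner would also hash both the source and the image (e.g., with sha256sum) to document integrity for the chain of custody.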
QUESTION 6 Which of the following is MOST likely to outline the roles and responsibilities of data controllers and data processors?
In order to set realistic expectations, please note: These questions are NOT official questions that you will find on the official exam. These questions DO cover all the material outlined in the knowledge sections below. Many of the questions are based on fictitious scenarios which have questions posed within them.
The official knowledge requirements for the exam are reviewed routinely to ensure that the content has the latest requirements incorporated in the practice questions. Updates to content are often made without prior notification and are subject to change at any time.
Each question has a detailed explanation and links to reference materials that support the answers, ensuring the accuracy of the solutions.
The questions will be shuffled each time you repeat the tests so you will need to know why an answer is correct, not just that the correct answer was item “B” last time you went through the test.
Candidates for this exam are database administrators and data management specialists who manage on-premises and cloud relational databases built with Microsoft SQL Server and Microsoft Azure Data Services.
The Azure Database Administrator implements and manages the operational aspects of cloud-native and hybrid data platform solutions built on Azure Data Services and SQL Server. The Azure Database Administrator uses a variety of methods and tools to perform day-to-day operations, including applying knowledge of using T-SQL for administrative management purposes.
This role is responsible for management, availability, security and performance monitoring and optimization of modern relational database solutions. This role works with the Azure Data Engineer role to manage operational aspects of data platform solutions.
Candidates for this role should understand all concepts covered in Exam DP-900: Microsoft Azure Data Fundamentals.
Skills measured on Microsoft Azure DP-300 Exam
Plan and Implement Data Platform Resources (15-20%)
Deploy resources by using manual methods
- deploy database offerings on selected platforms
- configure customized deployment templates
- apply patches and updates for hybrid and IaaS deployment

Recommend an appropriate database offering based on specific requirements
- evaluate requirements for the deployment
- evaluate the functional benefits/impact of possible database offerings
- evaluate the scalability of the possible database offering
- evaluate the HA/DR of the possible database offering
- evaluate the security aspects of the possible database offering

Configure resources for scale and performance
- configure Azure SQL Databases for scale and performance
- configure Azure SQL managed instances for scale and performance
- configure SQL Server in Azure VMs for scale and performance
- calculate resource requirements
- evaluate database partitioning techniques, such as database sharding
- set up SQL Data Sync

Evaluate a strategy for moving to Azure
- evaluate requirements for the migration
- evaluate offline or online migration strategies
- evaluate requirements for the upgrade
- evaluate offline or online upgrade strategies

Implement a migration or upgrade strategy for moving to Azure
- implement an online migration strategy
- implement an offline migration strategy
- implement an online upgrade strategy
- implement an offline upgrade strategy
Implement a Secure Environment (15-20%)
Configure database authentication by using platform and database tools
- configure Azure AD authentication
- create users from Azure AD identities
- configure security principals

Configure database authorization by using platform and database tools
- configure database and object-level permissions using graphical tools
- apply principle of least privilege for all securables

Implement security for data at rest
- implement Transparent Data Encryption (TDE)
- implement object-level encryption
- implement Dynamic Data Masking
- implement Azure Key Vault and disk encryption for Azure VMs

Implement security for data in transit
- configure server and database-level firewall rules
- implement Always Encrypted

Implement compliance controls for sensitive data
- apply a data classification strategy
- configure server and database audits
- implement data change tracking
- perform a vulnerability assessment
Monitor and Optimize Operational Resources (15-20%)
Monitor activity and performance
- prepare an operational performance baseline
- determine sources for performance metrics
- interpret performance metrics
- configure and monitor activity and performance at the infrastructure, server, service, and database levels

Identify performance-related issues
- configure Query Store to collect performance data
- identify sessions that cause blocking
- assess growth/fragmentation of databases and logs
- assess performance-related database configuration parameters

Configure resources for optimal performance
- configure storage and infrastructure resources
- configure server and service account settings for performance
- configure Resource Governor for performance

Configure a user database for optimal performance
- implement database-scoped configuration
- configure compute resources for scaling
- configure Intelligent Query Processing (IQP)
Optimize Query Performance (5-10%)
Review query plans
- determine the appropriate type of execution plan
- identify problem areas in execution plans
- extract query plans from the Query Store

Evaluate performance improvements
- determine the appropriate Dynamic Management Views (DMVs) to gather query performance information
- identify performance issues using DMVs
- identify and implement index changes for queries
- recommend query construct modifications based on resource usage
- assess the use of hints for query performance

Review database table and index design
- identify data quality issues with duplication of data
- identify normal form of database tables
- assess index design for performance
- validate data types defined for columns
- recommend table and index storage including filegroups
- evaluate table partitioning strategy
- evaluate the use of compression for tables and indexes
Perform Automation of Tasks (10-15%)
Create scheduled tasks
- manage schedules for regular maintenance jobs
- configure multi-server automation
- configure notifications for task success/failure/non-completion

Evaluate and implement an alert and notification strategy
- create event notifications based on metrics
- create event notifications for Azure resources
- create alerts for server configuration changes
- create tasks that respond to event notifications

Manage and automate tasks in Azure
- perform automated deployment methods for resources
- automate backups
- automate performance tuning and patching
- implement policies by using automated evaluation modes
Plan and Implement a High Availability and Disaster Recovery (HADR) Environment (15-20%)
Recommend an HADR strategy for a data platform solution
- recommend HADR strategy based on RPO/RTO requirements
- evaluate HADR for hybrid deployments
- evaluate Azure-specific HADR solutions
- identify resources for HADR solutions

Test an HADR strategy by using platform, OS, and database tools
- test HA by using failover
- test DR by using failover or restore

Perform backup and restore a database by using database tools
- perform a database backup with options
- perform a database restore with options
- perform a database restore to a point in time
- configure long-term backup retention

Configure HA/DR by using OS, platform, and database tools
- configure replication
- create an Always On Availability Group
- configure auto-failover groups
- integrate a database into an Availability Group
- configure quorum options for a Windows Server Failover Cluster
- configure an Availability Group listener
Perform Administration by Using T-SQL (10-15%)
Examine system health
- evaluate database health using DMVs
- evaluate server health using DMVs
- perform database consistency checks by using DBCC

Monitor database configuration by using T-SQL
- assess proper database autogrowth configuration
- report on database free space
- review database configuration options

Perform backup and restore a database by using T-SQL
- prepare databases for Always On Availability Groups
- perform transaction log backup
- perform restore of user databases
- perform database backups with options

Manage authentication by using T-SQL
- manage certificates
- manage security principals

Manage authorization by using T-SQL
- configure permissions for users to access database objects
- configure permissions by using custom roles
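As one hedged illustration of the backup and restore tasks above, this T-SQL sketch shows a full backup, a log backup, and a point-in-time restore. The database name, file paths, and logical file names are placeholders, not part of the exam outline; a real environment would supply its own.

```sql
-- Full backup with integrity checking (options vary per environment).
BACKUP DATABASE SalesDb
TO DISK = N'C:\Backups\SalesDb_full.bak'
WITH CHECKSUM, COMPRESSION;

-- Transaction log backup: the prerequisite for point-in-time restore.
BACKUP LOG SalesDb TO DISK = N'C:\Backups\SalesDb_log.trn';

-- Restore to a point in time: restore the full backup WITH NORECOVERY,
-- then replay the log and stop at the desired moment.
RESTORE DATABASE SalesDb_Copy
FROM DISK = N'C:\Backups\SalesDb_full.bak'
WITH NORECOVERY,
     MOVE 'SalesDb' TO N'C:\Data\SalesDb_Copy.mdf',
     MOVE 'SalesDb_log' TO N'C:\Data\SalesDb_Copy.ldf';

RESTORE LOG SalesDb_Copy
FROM DISK = N'C:\Backups\SalesDb_log.trn'
WITH RECOVERY, STOPAT = '2024-01-15T10:30:00';
```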
The exam is available in the following languages: English, Chinese (Simplified), Japanese, Korean
What you’ll learn
Exam DP-300: Administering Relational Databases on Microsoft Azure
- Plan and implement data platform resources: deploy resources by using manual methods; recommend an appropriate database offering based on specific requirements; configure resources for scale and performance; evaluate a strategy for moving to Azure; implement a migration or upgrade strategy for moving to Azure
- Implement a secure environment: configure database authentication and authorization by using platform and database tools; implement security for data at rest and in transit; implement compliance controls for sensitive data
- Monitor and optimize operational resources: monitor activity and performance; implement performance-related maintenance tasks; identify performance-related issues; configure resources and user databases for optimal performance
- Optimize query performance: review query plans; evaluate performance improvements; review database table and index design
- Perform automation of tasks: create scheduled tasks; evaluate and implement an alert and notification strategy; manage and automate tasks in Azure
- Plan and implement a High Availability and Disaster Recovery (HADR) environment: recommend an HADR strategy for a data platform solution; test an HADR strategy by using platform, OS, and database tools; perform backup and restore a database by using database tools; configure HA/DR by using OS, platform, and database tools
- Perform administration by using T-SQL: examine system health; monitor database configuration; perform backup and restore; manage authentication and authorization

Are there any course requirements or prerequisites?
Candidates for this exam are database administrators and data management specialists who manage on-premises and cloud relational databases built with Microsoft SQL Server and Microsoft Azure Data Services. The Azure Database Administrator implements and manages the operational aspects of cloud-native and hybrid data platform solutions built on Azure Data Services and SQL Server. The Azure Database Administrator uses a variety of methods and tools to perform day-to-day operations, including applying knowledge of using T-SQL for administrative management purposes.
Who this course is for: Microsoft Azure professionals who want to be Microsoft DP-300 certified
QUESTION 1 What should you use to migrate the PostgreSQL database?
A. Azure Data Box B. AzCopy C. Azure Database Migration Service D. Azure Site Recovery
Answer: C
QUESTION 2 You need to design a data retention solution for the Twitter feed data records. The solution must meet the customer sentiment analytics requirements. Which Azure Storage functionality should you include in the solution?
A. time-based retention B. change feed C. lifecycle management D. soft delete
Answer: C
QUESTION 3 You need to implement the surrogate key for the retail store table. The solution must meet the sales transaction dataset requirements. What should you create?
A. a table that has a FOREIGN KEY constraint B. a table the has an IDENTITY property C. a user-defined SEQUENCE object D. a system-versioned temporal table
Answer: B
QUESTION 4 You have 20 Azure SQL databases provisioned by using the vCore purchasing model. You plan to create an Azure SQL Database elastic pool and add the 20 databases. Which three metrics should you use to size the elastic pool to meet the demands of your workload? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
A. total size of all the databases B. geo-replication support C. number of concurrently peaking databases * peak CPU utilization per database D. maximum number of concurrent sessions for all the databases E. total number of databases * average CPU utilization per database
Answer: A,C,E
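The three chosen metrics can be turned into a rough sizing calculation. The sketch below uses invented numbers purely for illustration; it is not Microsoft sizing guidance, only a way to see why the aggregate metrics (A, C, E) matter and the per-database ones (B, D) do not drive pool size.

```python
# Hypothetical sizing sketch for an Azure SQL elastic pool (vCore model).
db_count = 20
avg_cpu_vcores_per_db = 0.2      # metric E: total databases * average utilization
peak_cpu_vcores_per_db = 2.0     # metric C: per-database peak demand
concurrently_peaking = 3         # metric C: databases peaking at the same time
total_size_gb = 20 * 25          # metric A: total storage across all databases

baseline_vcores = db_count * avg_cpu_vcores_per_db            # 4.0
peak_vcores = concurrently_peaking * peak_cpu_vcores_per_db   # 6.0

# Provision for whichever demand is higher, then round up to an
# available pool SKU; storage must also fit within the pool's limit.
pool_vcores = max(baseline_vcores, peak_vcores)
print(pool_vcores)    # 6.0
print(total_size_gb)  # 500
```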
QUESTION 5 You have a Microsoft SQL Server 2019 database named DB1 that uses the following database-level and instance-level features:
- Clustered columnstore indexes
- Automatic tuning
- Change tracking
- PolyBase
You plan to migrate DB1 to an Azure SQL database. Which feature should be removed or replaced before DB1 can be migrated?
A. Clustered columnstore indexes B. PolyBase C. Change tracking D. Automatic tuning
Answer: B
Exam overview The CCDE v3.0 Written exam (400-007) will validate that candidates have the expertise to gather and clarify network functional requirements, develop network designs to meet functional specifications, develop implementation plans, convey design decisions and their rationale, and possess expert-level knowledge including:
Business Strategy Design Control, data, and management plane design Network Design Service Design Security Design
Study groups Make new connections and study together by joining the CCDE study group.
Training library A comprehensive technical training library that offers full-length, interactive courses focused on associate and professional certifications, product and technology training with labs, and thousands of reference materials.
Cisco Digital Learning CCDE v3.0 Unified Exam Topics
Exam Description: The exam topics below are general guidelines for the content likely to be included on both the CCDE Written (400-007) and the CCDE Practical exam. The CCDE v3.0 Written exam (400-007) is a two-hour, multiple-choice test with 90-110 questions that focuses on core Enterprise network architectures and technologies. The CCDE v3.0 Practical Exam is an 8-hour, scenario-based exam that focuses on core Enterprise network architectures and technologies, as well as on your selected area of expertise. Both exams validate your knowledge, skills, and abilities throughout the entire network design lifecycle. Both exams are closed book, and no outside reference materials are allowed.
Your knowledge, skills, and abilities of recommending, building, validating, optimizing, and adapting technologies/solutions in the context of complex high-level network designs will be tested throughout the exam:
• Recommend technologies or solutions that align with the stated requirements. • Justify why a given decision was made. • Make design choices and fully design solutions that comply with the stated requirements. • Validate existing designs to ensure they are compliant with all requirements, and suggest design changes to accommodate changed specifications or requirements in the network. • Perform optimizations of existing network designs to fix issues or mitigate risks. • Build high-level implementation plans/steps. • Recommend, build, or justify strategies.
For more information about the exam format and the technologies covered within your exam, please refer to: • CCDE v3.0 Written and Practical Exam Format • Core – technology list • Workforce Mobility – technology list • On-Prem and Cloud Services – technology list • Large Scale Networks – technology list
15% 1.0 Business Strategy Design
1.1 Impact on network design, implementation, and optimization using various customer project management methodologies (for instance waterfall and agile)
1.2 Solutions based on business continuity and operational sustainability (for instance RPO, ROI, CAPEX/OPEX cost analysis, and risk/reward)

25% 2.0 Control, Data, Management Plane, and Operational Design
2.1 End-to-end IP traffic flow in a feature-rich network
2.2 Data, control, and management plane technologies
2.3 Centralized, decentralized, or hybrid control plane
2.4 Automation/orchestration design, integration, and on-going support for networks (for instance interfacing with APIs, model-driven management, controller-based technologies, evolution to CI/CD framework)
2.5 Software-defined architecture and controller-based solution design (SD-WAN, overlay, underlay, and fabric)

30% 3.0 Network Design
3.1 Resilient, scalable, and secure modular networks, covering both traditional and software-defined architectures, considering:
3.1.a Technical constraints and requirements
3.1.b Operational constraints and requirements
3.1.c Application behavior and needs
3.1.d Business requirements
3.1.e Implementation plans
3.1.f Migration and transformation

15% 4.0 Service Design
4.1 Resilient, scalable, and secure modular network design based on constraints (for instance technical, operational, application, and business constraints) to support applications on the IP network (for instance voice, video, backups, data center replication, IoT, and storage)
4.2 Cloud/hybrid solutions based on business-critical operations
4.2.a Regulatory compliance
4.2.b Data governance (for instance sovereignty, ownership, and locale)
4.2.c Service placement
4.2.d SaaS, PaaS, and IaaS
4.2.e Cloud connectivity (for instance direct connect, cloud on ramp, MPLS direct connect, and WAN integration)
4.2.f Security

15% 5.0 Security Design
5.1 Network security design and integration
5.1.a Segmentation
5.1.b Network access control
5.1.c Visibility
5.1.d Policy enforcement
5.1.e CIA triad
5.1.f Regulatory compliance (if provided the regulation)
Question 1: A company plans to establish a new network using Cisco Catalyst switches for its multicast applications. What is the disadvantage when two multicast applications are using the multicast IP addresses 234.17.4.5 and 234.145.4.5 inside the same network?
A. The routers doing PIM-SM cannot distinguish between the two multicast applications. B. Only one multicast stream is received at the port where the receivers from both applications are located. C. Both multicast senders will always receive the multicast packets from the other multicast application. D. Multicast packets from both applications are flooded to all Layer 2 ports in a segment where a multicast receiver is located. E. Multicast packets from both applications are flooded to ports where one multicast receiver from one application is located.
Correct Answer: E
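The drawback here comes from how IPv4 multicast group addresses map to Ethernet MAC addresses: only the low-order 23 bits of the group address are copied into the 01:00:5e MAC prefix, so two groups whose addresses differ only in the masked-off high bit of the second octet collide on the same frame address. A minimal Python sketch of that mapping (the function name is illustrative):

```python
def multicast_mac(group_ip: str) -> str:
    """Map an IPv4 multicast group address to its Ethernet MAC (RFC 1112 mapping)."""
    octets = [int(o) for o in group_ip.split(".")]
    # Only the lower 23 bits of the group address survive the mapping,
    # so the high bit of the second octet is masked off.
    return "01:00:5e:%02x:%02x:%02x" % (octets[1] & 0x7F, octets[2], octets[3])

print(multicast_mac("234.17.4.5"))   # 01:00:5e:11:04:05
print(multicast_mac("234.145.4.5"))  # 01:00:5e:11:04:05 -- same MAC address
```

Because both groups share one MAC address, a Layer 2 switch forwarding on the multicast MAC delivers both streams to any port that joined either group, which is the flooding behavior described in answer E.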
Question 2: A many-to-many enterprise messaging application is using multicast as a transport mechanism. As part of the network design for this application, which multicast address should be used, according to best practices outlined in RFC 2365?
A. 224.0.0.60 B. 239.128.0.60 C. 239.193.0.60 D. 232.192.0.60
Correct Answer: C
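RFC 2365 reserves 239.192.0.0/14 as the organization-local scope, which is the recommended range for an enterprise-wide application like this. The scope check can be reproduced with Python's ipaddress module:

```python
import ipaddress

# RFC 2365 organization-local scope
org_local = ipaddress.ip_network("239.192.0.0/14")

# Only one of the answer options falls inside the organization-local scope
for candidate in ["224.0.0.60", "239.128.0.60", "239.193.0.60", "232.192.0.60"]:
    print(candidate, "->", ipaddress.ip_address(candidate) in org_local)
```

Only 239.193.0.60 falls inside 239.192.0.0/14 (239.192.0.0 through 239.195.255.255), which is why answer C is correct.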
Question 3: A planned EBGP network will use OSPF to reach the EBGP peer addresses. Which of these conditions should be avoided in the design, because it could otherwise cause the peers to flap continuously?
A. IP addresses used to peer are also being sent via EBGP B. The OSPF area used for peering is nonbackbone (not area 0). C. An ACL blocks TCP port 179 in one direction. D. The routers are peered by using a default route sent by OSPF.
Correct Answer: A
Question 4: In a redesign of a multiple-area network, it is recommended that summarization be implemented. For redundancy, summarization is done at multiple locations for each summary. Some customers now complain of higher latency and performance issues for a server hosted in the summarized area. Which design issue should be considered when creating the summarization?
A. Summarization prevents the visibility of the metric to the component subnets. B. Summarization creates routing loops. C. Summarization adds CPU overhead on the routers sourcing the summarized advertisement. D. Summarization causes packet loss when RPF is enabled.
Correct Answer: A
Question 5: Which two techniques are used in a network design to slow down the distribution of topology information caused by a rapidly flapping link? (Choose two.)
A. link-state incremental SPF B. Bidirectional Forwarding Detection C. IP event dampening D. link-state partial SPF E. LSA throttling
Implementing and Configuring Cisco Identity Services Engine (SISE)
SISE training videos
Exam Description: Implementing and Configuring Cisco Identity Services Engine (SISE 300-715) is a 90-minute exam associated with the CCNP Security Certification. This exam tests a candidate’s knowledge of Cisco Identity Services Engine, including architecture and deployment, policy enforcement, Web Auth and guest services, profiler, BYOD, endpoint compliance, and network access device administration. The course, Implementing and Configuring Cisco Identity Services Engine, helps candidates to prepare for this exam.
1.0 Architecture and Deployment 1.1 Configure personas 1.2 Describe deployment options
2.0 Policy Enforcement 2.1 Configure native AD and LDAP 2.2 Describe identity store options 2.2.a LDAP 2.2.b AD 2.2.c PKI 2.2.d OTP 2.2.e Smart Card 2.2.f Local 2.3 Configure wired/wireless 802.1X network access 2.4 Configure 802.1X phasing deployment 2.4.a Monitor mode 2.4.b Low impact 2.4.c Closed mode 2.5 Configure network access devices 2.6 Implement MAB 2.7 Configure Cisco TrustSec 2.8 Configure policies including authentication and authorization profiles
3.0 Web Auth and Guest Services 3.1 Configure web authentication 3.2 Configure guest access services 3.3 Configure sponsor and guest portals
5.0 BYOD 5.1 Describe Cisco BYOD functionality 5.1.a Use cases and requirements 5.1.b Solution components 5.1.c BYOD flow 5.2 Configure BYOD device on-boarding using internal CA with Cisco switches and Cisco wireless LAN controllers 5.3 Configure certificates for BYOD 5.4 Configure block list/allow list
6.0 Endpoint Compliance 6.1 Describe endpoint compliance, posture services, and client provisioning 6.2 Configure posture conditions and policy, and client provisioning 6.3 Configure the compliance module 6.4 Configure Cisco ISE posture agents and operational modes 6.5 Describe supplicant, supplicant options, authenticator, and server
QUESTION 1 Which personas can a Cisco ISE node assume?
A. policy service, gatekeeping, and monitoring B. administration, monitoring, and gatekeeping C. administration, policy service, and monitoring D. administration, policy service, gatekeeping
Answer: C
QUESTION 2 What occurs when a Cisco ISE distributed deployment has two nodes and the secondary node is deregistered?
A. The secondary node restarts. B. The primary node restarts. C. Both nodes restart. D. The primary node becomes standalone.
Answer: C
QUESTION 3 Which two features are available when the primary admin node is down and the secondary admin node has not been promoted? (Choose two.)
A. new AD user 802.1X authentication B. hotspot C. posture D. guest AUP E. BYOD
Answer: B,D
QUESTION 4 Which supplicant(s) and server(s) are capable of supporting EAP-CHAINING?
A. Cisco Secure Services Client and Cisco Access Control Server B. Cisco AnyConnect NAM and Cisco Identity Service Engine C. Cisco AnyConnect NAM and Cisco Access Control Server D. Windows Native Supplicant and Cisco Identity Service Engine
300-815 CLACCM Implementing Cisco Advanced Call Control and Mobility Services Duration: 90 minutes Languages: English Exam overview Associated certifications: CCNP Collaboration Cisco Certified Specialist – Collaboration Call Control & Mobility Implementation
This exam tests your knowledge of advanced call control and mobility services, including: Signaling and media protocols CME/SRST gateway technologies Cisco Unified Border Element Call control and dial planning Cisco Unified CM Call Control and Mobility
Implementing Cisco Advanced Call Control and Mobility Services v1.0 (300-815) Exam Description: Implementing Cisco Advanced Call Control and Mobility Services v1.0 (CLACCM 300-815) is a 90-minute exam associated with the CCNP Collaboration Certification. This exam tests a candidate’s knowledge of advanced call control and mobility services, including signaling and media protocols, CME/SRST gateway technologies, Cisco Unified Border Element, call control and dial planning, Cisco Unified CM Call Control, and mobility. The course, Implementing Cisco Advanced Call Control and Mobility Services, helps candidates to prepare for this exam.
The following topics are general guidelines for the content likely to be included on the exam. However, other related topics may also appear on any specific delivery of the exam. To better reflect the contents of the exam and for clarity purposes, the guidelines below may change at any time without notice.
20% 1.0 Signaling and Media Protocols 1.1 Troubleshoot these elements of a SIP conversation 1.1.a Early media 1.1.b PRACK 1.1.c Mid-call signaling (hold/resume, call transfer, conferencing) 1.1.d Session timers 1.1.e UPDATE 1.2 Troubleshoot these H.323 protocol elements 1.2.a DTMF 1.2.b Call set up and tear down 1.3 Troubleshoot media establishment 10% 2.0 CME/SRST Gateway Technologies 2.1 Configure Cisco Unified Communications Manager Express for SIP phone registration 2.2 Configure Cisco Unified CME dial plans 2.3 Implement toll fraud prevention 2.4 Configure these advanced Cisco Unified CME features 2.4.a Hunt groups 2.4.b Call park 2.4.c Paging 2.5 Configure SIP SRST gateway
15% 3.0 Cisco Unified Border Element 3.1 Configure these Cisco Unified Border Element dial plan elements 3.1.a DTMF 3.1.b Voice translation rules and profiles 3.1.c Codec preference list 3.1.d Dial peers 3.1.e Header and SDP manipulation with SIP profiles 3.1.f Signaling and media bindings 3.2 Troubleshoot these Cisco Unified Border Element dial plan elements 3.2.a DTMF 3.2.b Voice translation rules and profiles 3.2.c Codec preference list 3.2.d Dial peers 3.2.e Header and SDP manipulation with SIP profiles 3.2.f Signaling and media bindings 25% 4.0 Call Control and Dial Planning 4.1 Configure these globalized call routing elements in Cisco Unified Communications Manager 4.1.a Translation patterns 4.1.b Route patterns 4.1.c SIP route patterns 4.1.d Transformation patterns 4.1.e Standard local route group 4.1.f TEHO 4.1.g SIP trunking 4.2 Troubleshoot these globalized call routing elements in Cisco Unified Communications Manager 4.2.a Translation patterns 4.2.b Route patterns 4.2.c SIP route patterns 4.2.d Transformation patterns 4.2.e Standard local route group 4.2.f TEHO 4.2.g SIP trunking 20% 5.0 Cisco Unified CM Call Control Features 5.1 Troubleshoot Call Admission Control (exclude RSVP) 5.2 Configure ILS, URI synchronization, and GDPR 5.3 Configure hunt groups 5.4 Configure call queuing 5.5 Configure time of day routing 5.6 Configure supplementary functions 5.6.a Call park 5.6.b Meet-me 5.6.c Call pick-up 10% 6.0 Mobility 6.1 Configure Cisco Unified Communications Manager Mobility 6.1.a Unified Mobility 6.1.b Extension Mobility 6.1.c Device Mobility 6.2 Troubleshoot Cisco Unified Communications Manager Mobility 6.2.a Unified Mobility 6.2.b Extension Mobility 6.2.c Device Mobility
QUESTION 1 An administrator is troubleshooting a one-way audio issue for a call that uses the H.323 protocol in slow-start mode. The administrator needs the IP and port information of the Real-Time Transport Protocol (RTP) traffic for the one-way audio call. The H.225 and H.245 messages for one of the one-way audio calls are gathered, and the call flow has not invoked any media resources. Where is the RTP IP and port information for both sides found?
A. H.245 Terminal Capability Set B. H.245 Open Logical Channel C. H.225 Connect D. H.245 Open Logical Channel Ack
Answer: B
QUESTION 2 Which two extended capabilities must be configured on dial peers for fast start-to-early media scenarios (H.323 to SIP interworking)? (Choose two.)
A. DTMF B. BFCP C. VIDEO D. FAX E. AUDIO
Answer: A,B
QUESTION 3 When an administrator troubleshoots H.323 call setup, which message gives an alert that the called party is being notified about the call?
A. ALERTING B. PROCEEDING C. CONNECT D. RINGING
Answer: A

QUESTION 4 End users at a new site report being unable to hear the remote party when calling or being called by users at headquarters. Calls to and from the PSTN work as expected. To investigate the SIP signaling to troubleshoot the problem, which field can provide a hint for troubleshooting?
A. Contact: header of the 200 OK response B. Allow: header if the 200 OK response C. o= line of SDP content D. c= line of SDP content
Answer: C
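For context on why the c= line matters here: in SDP it advertises the connection address to which the far end should send RTP. A minimal illustrative SDP body (all addresses and ports hypothetical):

```
v=0
o=- 20518 0 IN IP4 10.10.1.20
s=-
c=IN IP4 10.10.1.20
t=0 0
m=audio 16384 RTP/AVP 0 101
a=rtpmap:0 PCMU/8000
```

If the address in the c= line is not reachable from the remote site (for example, a NATed private address that was never rewritten), media sent toward it never arrives, producing exactly the one-way audio symptom described.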
QUESTION 5 Why would RTP traffic that is sent from the originating endpoint fail to be received on the far endpoint?
A. The far end connection data (c=) in the SDP was overwritten by deep packet inspection in the call signaling path. B. Cisco UCM invoked media termination point resources. C. The RTP traffic is arriving beyond the jitter buffer on the receiving end. D. A firewall in the media path is blocking TCP ports 16384-32768.
Exam Codes CompTIA A+ 220-1101 (Core 1) and 220-1102 (Core 2) Candidates must complete both 1101 and 1102 to earn certification. Exams cannot be combined across the series.
CompTIA A+ certified professionals are proven problem solvers. They support today’s core technologies from security to networking to virtualization and more. CompTIA A+ is the industry standard for launching IT careers into today’s digital world.
CompTIA A+ is the only industry-recognized credential with performance testing to prove pros can think on their feet to perform critical IT support tasks. It is trusted by employers around the world to identify the go-to person in endpoint management and technical support roles. CompTIA A+ appears in more tech support job listings than any other IT credential.
The CompTIA A+ Core Series requires candidates to pass two exams: Core 1 (220-1101) and Core 2 (220-1102) covering the following new content, emphasizing the technologies and skills IT pros need to support a hybrid workforce.
Increased reliance on SaaS applications for remote work
More on troubleshooting, and how to remotely diagnose and correct common software, hardware, or connectivity problems
Changing core technologies, from cloud virtualization and IoT device security to data management and scripting
Multiple operating systems now encountered by technicians on a regular basis, including the major systems, their use cases, and how to keep them running properly
The changing nature of the job role, where many tasks are sent to specialized providers; certified personnel need to assess whether it’s best to fix something on site or to save time and money by sending proprietary technologies directly to vendors
9 skills that you master and validate with CompTIA A+
HARDWARE Identifying, using, and connecting hardware components and devices, including the broad knowledge about different devices that is now necessary to support the remote workforce
OPERATING SYSTEMS Install and support Windows OS, including command line and client support. System configuration, imaging, and troubleshooting for macOS, Chrome OS, Android, and Linux.
SOFTWARE TROUBLESHOOTING Troubleshoot PC and mobile device issues, including common OS, malware, and security issues.
NETWORKING Explain types of networks and connections, including TCP/IP, Wi-Fi, and SOHO networks
TROUBLESHOOTING Troubleshoot real-world device and network issues quickly and efficiently
SECURITY Identify and protect against security vulnerabilities for devices and their network connections
MOBILE DEVICES Install & configure laptops and other mobile devices and support applications to ensure connectivity for end- users
VIRTUALIZATION & CLOUD COMPUTING Compare & contrast cloud computing concepts & set up client-side virtualization
OPERATIONAL PROCEDURES Follow best practices for safety, environmental impacts, and communication and professionalism
Jobs that use A+
Help Desk Technician
Desktop Support Specialist
Field Service Technician
Associate Network Engineer
System Support Technician
Junior Systems Administrator
All questions are based on the Exam Objectives for the 220-1102 exam for all 4 domains of the exam, so you can take the actual CompTIA A+ (220-1102) Core 2 exam with confidence, and PASS it!
In this course, I fully prepare you for what it is like to take the CompTIA A+ (220-1102) certification exam. These six full-length practice exams, each with 90 questions timed at 90 minutes, help you pace yourself for the real exam. I have carefully hand-crafted each question to put you to the test and prepare you to pass the certification exam with confidence.
You won’t be hoping you are ready; you will know you are ready to sit for and pass the exam. After practicing these tests and scoring 90% or higher on them, you will be ready to PASS on the first attempt and avoid costly re-schedule fees, saving you time and money.
You will receive your total final score, along with feedback on every question explaining exactly why each answer is correct and which domain (and objective) you need to study more, so you can pinpoint the areas in which you need to improve and perform additional study.
We cover all four domains of the 220-1102 exam, including: 1.0 Operating Systems (31%) 2.0 Security (25%) 3.0 Software Troubleshooting (22%) 4.0 Operational Procedures (22%)
Our CompTIA A+ (220-1102) Core 2 practice exams provide you with realistic test questions and provide you with interactive, question-level feedback.
I have some of the highest-rated CompTIA training courses on the Udemy platform. I invite you to visit my instructor profile to learn more about me, the certifications that I hold, and read my courses’ reviews. This course is consistently updated to ensure it stays current and up-to-date with the CompTIA A+ exam’s latest release and provides a 30-day money-back guarantee, no questions asked!
What you’ll learn How to pass the CompTIA A+ (220-1102) Core 2 exam on your first attempt What your weak areas are in the CompTIA A+ curriculum so you can restudy those areas Install, configure, and maintain computer equipment, mobile devices, and software for end users Service components based on customer requirements Understand networking basics and apply basic cybersecurity methods to mitigate threats Properly and safely diagnose, resolve, and document common hardware and software issues Apply troubleshooting skills and provide customer support using appropriate communication skills Understand the basics of scripting, cloud technologies, virtualization, and multi-OS deployments in corporate environments
Are there any course requirements or prerequisites? CompTIA recommends having 9-12 months of on-the-job experience before taking the CompTIA A+ (220-1102) Core 2 exam (Recommended but not required) Have read a book, watched a video series, or otherwise started studying for the CompTIA A+ (220-1102) Core 2 exam Working knowledge of computers and small networks
Who this course is for: Anyone looking to take and pass the CompTIA A+ (220-1102) Core 2 certification exam Anyone who needs to become a better test taker before attempting the CompTIA A+ (220-1102) Core 2 certification exam
Question 1: You are going to replace a power supply in a desktop computer. Which of the following actions should you take FIRST?
A. Use a grounding probe to discharge the power supply B. Remove any jewelry you are wearing C. Dispose of the old power supply D. Verify proper cable management is being used
Correct Answer : B
Explanation OBJ-4.4: Before working on a computer or server, you should always remove your jewelry. Jewelry such as bracelets and necklaces can often dangle and come into contact with sensitive components or electrical connections that can damage the components or injure you. Therefore, all jewelry should be removed before working on an electrical system or computer to reduce the risk of shock. A grounding probe is not required to discharge the power supply since the technician should never open the case of a power supply. The old power supply should be safely disposed of after it is removed, but it should not be removed until you have removed your jewelry. Proper cable management is important when installing a power supply, but again, this should only occur after removing your jewelry.

Question 2: What umask should be set for a directory to have 700 as its octal permissions?
A. r--r--r-- B. rwx---rwx C. rwx------ D. rwxrwxrwx
Correct Answer : C
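To see why rwx------ corresponds to 700: each three-character triplet (owner, group, other) sums r=4, w=2, x=1. A throwaway Python helper (the function name is illustrative):

```python
def perm_to_octal(perm: str) -> str:
    """Convert a 9-character rwx permission string (owner/group/other) to octal digits."""
    value = {"r": 4, "w": 2, "x": 1, "-": 0}
    # Sum each triplet: owner (0-2), group (3-5), other (6-8)
    return "".join(str(sum(value[c] for c in perm[i:i + 3])) for i in (0, 3, 6))

print(perm_to_octal("rwx------"))  # 700
print(perm_to_octal("rwxrwxrwx"))  # 777
```

Strictly speaking, the umask that yields 700 on newly created directories is 077 (the complement of 700 against the default 777); the answer options list the resulting permission string rather than the umask itself.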
Explanation OBJ-2.6: rwx is 7 and --- is 0. In Linux, you can convert letter permissions to octal by giving 4 for each r, 2 for each w, and 1 for each x. r is for read, w is for write, and x is for execute. The permission strings are written to represent the owner’s permissions, the group’s permissions, and the other users’ permissions.

Question 3: Which of the following physical security controls would be the most effective in preventing an attacker from driving a vehicle through the glass doors at the front of the organization’s headquarters?
A. Bollards B. Intrusion alarm C. Access control vestibule D. Security guards
Correct Answer : A
Explanation OBJ-2.1: Bollards are a physical security control that is designed to prevent a vehicle-ramming attack. Bollards are typically designed as sturdy, short, vertical posts. Some organizations have installed more decorative bollards created out of cement and are large enough to plant flowers or trees inside. Access control vestibules are designed to prevent individuals from tailgating into the building. Security guards and intrusion alarms could detect this from occurring but not truly prevent them.
Question 4: Which of the following tools should you utilize to ensure you don’t damage a laptop’s SSD while replacing it?
A. Air filter mask B. Antistatic bag C. ESD strap D. Latex gloves
Correct Answer : C
Explanation OBJ-4.4: The key to answering this question is the word “while” in the sentence. Since you need to protect the SSD “while” you are replacing it, you must ensure you wear an ESD strap. An ESD strap is placed around your wrist and dissipates any static electricity from your body to protect sensitive hardware such as processors, memory, expansion cards, and SSDs during installation. An electrostatic discharge (ESD) is the release of a charge from metal or plastic surfaces that occurs when a potential difference is formed between the charged object and an oppositely charged conductive object. This electrical discharge can damage silicon chips and computer components if they are exposed to it. An antistatic bag is a packaging material containing anti-ESD shielding or dissipative materials to protect components from ESD damage. An air filter mask is a mask manufactured from polyester sheets that covers your nose and mouth to prevent dust from being breathed in by a technician. Latex gloves are hand coverings to protect the technician when they are working with toner or other chemicals.

Question 5: You are working on upgrading the memory of a laptop. After removing the old memory chips from the laptop, where should you safely store them until you are ready to reuse them in another laptop?
A. Ziplock bags B. Manila envelopes C. Cardboard box D. Antistatic bag
Correct Answer : D
Explanation OBJ-4.4: To properly handle and store sensitive components, like a memory chip, you should use an ESD strap and place the components in an antistatic bag. An antistatic bag is a bag used for storing electronic components, which are prone to damage caused by electrostatic discharge (ESD). These bags are usually plastic polyethylene terephthalate (PET) and have a distinctive color (silvery for metalized film, pink or black for polyethylene).
This exam tests your skills with the WLAN design, deployment, and troubleshooting of Aruba Mobile First Network Solutions in complex highly available campus and branch environments. It also tests your ability to configure specialized applications, management, and security requirements for a WLAN such as UCC Voice and advanced security features.
You need an HPE Learner ID and a Pearson VUE login and password.
Register for this Exam No reference material is allowed at the testing site. This exam may contain beta test items for experimental purposes.
During the exam, you can make comments about the exam items. We welcome these comments as part of our continuous improvement process.
Exam ID HPE6-A71 Exam type Proctored Exam duration 1 hour 30 minutes Exam length 60 questions Passing score 65% Delivery languages Latin American Spanish, Japanese, English Supporting resources Implementing Aruba Mobility, Rev. 20.11
Additional study materials Aruba Certified Mobility Professional Study Guide
Ideal candidate
Typical candidates for this exam are networking IT professionals with a minimum of two years of advanced-level implementation experience with Aruba WLAN solutions and a minimum of three years of experience with wired LAN infrastructure and switching and routing technologies.
Exam contents
This exam has 60 questions.
Advice to help you take this exam
Complete the training and review all course materials and documents before you take the exam. Use HPE Press study guides and additional reference materials such as practice tests and HPE books. Exam items are based on expected knowledge acquired from job experience, an expected level of industry-standard knowledge, or other prerequisites (events, supplemental materials, etc.). Successful completion of the course or study materials alone does not ensure you will pass the exam.
Exam policies View the exam security and retake policies on the HPE certification website.
This exam validates that you can:
20% Integrate and implement Aruba Mobile First architecture components and explain their uses.
• Integrate components of the Aruba Mobile First Architecture. • Differentiate between standalone mode and Master Controller Mode (MCM) features and recommend use cases. • Differentiate the use of packet forwarding modes (tunnel, decrypt-tunnel, split-tunnel, and bridge). • Differentiate between redundancy methods, and describe the benefits of L2 and L3 clustering. • Explain Remote Access architectures and how to integrate the architectures. • Describe and differentiate advanced licensing features.
20% Configure and validate Aruba WLAN secure employee and guest solutions. • Configure Remote Access with Aruba Solutions such as RAP and VIA. • Configure and deploy redundant controller solutions based upon a given design. • Configure a Mesh WLAN.
38% Implement advanced services and security. • Enable multicast DNS features to support discovery across VLAN boundaries. • Configure role derivation, and explain and implement advanced role features. • Configure an AAA server profile for a user or administrative access. • Implement Mobility Infrastructure hardening features. • Explain Clarity features and functions. • Implement Voice WLAN based upon a given design. • Configure primary zones and data zones to support MultiZone AP. • Implement mobility (roaming) in an Aruba wireless environment. • Implement tunneled node to secure ArubaOS switches.
10% Manage and monitor Aruba solutions. • Use AirWave to monitor an Aruba Mobility Master and Mobility Controller. • Perform maintenance upgrades and operational maintenance.
12% Troubleshoot Aruba WLAN solutions. • Troubleshoot controller communication. • Troubleshoot the WLAN. • Troubleshoot Remote Access. • Troubleshoot issues related to services and security. • Troubleshoot role-based access, per-port based security and Airmatch.
QUESTION 1 An administrator deploys an AP at a branch office. The branch office has a private WAN circuit that provides connectivity to a corporate office controller. An Ethernet port on the AP is connected to a network storage device that contains sensitive information. The administrator is concerned about sending this traffic in cleartext across the private WAN circuit. What can the administrator do to prevent this problem?
A. Enable IPSec encryption on the AP’s wired ports. B. Convert the campus AP into a RAP. C. Redirect the wired port traffic to an AP-to-controller GRE tunnel. D. Enable AP encryption for wired ports.
Correct Answer: A
QUESTION 2 An administrator needs to modify a VAP used for a branch office RAP. The VAP’s operating mode is currently defined as backup and uses tunnel mode forwarding. The administrator wants to implement split-tunnel forwarding mode in the VAP.
Which WLAN operating mode must the administrator define for the VAP before the tunnel forwarding mode can be changed to split-tunnel?
A. Trusted B. Always C. Persistent D. Standard

Correct Answer: D
QUESTION 3 An administrator creates service-based policies for AirGroup on the Mobility Master (MM). The administrator can define location-based policy limits based on which information?
A. controller names, controller groups, and controller Fully Qualified Domain Names (FQDNs) B. AP names, AP groups, controller names, and controller groups C. AP Fully Qualified Location Names (FQLNs) and controller Fully Qualified Domain Names (FQDNs) D. AP names, AP groups, and AP Fully Qualified Location Names (FQLNs)
Correct Answer: D
QUESTION 4 An administrator supports a RAP at a branch office. A user’s device that is attached to the Ethernet port is assigned an 802.1X AAA policy and is configured for tunneled node. How is the user’s traffic transmitted to the corporate office?
A. It is not encapsulated by GRE and not protected with IPSec. B. It is encapsulated by GRE and protected with IPSec. C. It is not encapsulated by GRE but is protected with IPSec. D. It is encapsulated by GRE and not protected with IPSec.
CompTIA A+ is the industry standard for establishing a career in IT.
CompTIA A+ 220-1101 (Core 1) and 220-1102 (Core 2)
Candidates must complete both 1101 and 1102 to earn certification. Exams cannot be combined across the series. Launch Date: April 2022 Exam Description: CompTIA A+ 220-1101 covers mobile devices, networking technology, hardware, virtualization and cloud computing. Number of Questions: Maximum of 90 questions per exam Length of Test: 90 Minutes per exam Languages: English at launch. German, Japanese, Portuguese, Thai and Spanish Retirement: TBD – Usually three years after launch Testing Provider: Pearson VUE (Testing Centers and Online Testing)
We cover all five domains of the 220-1101 exam, including: 1.0 Mobile Devices (15%) 2.0 Networking (20%) 3.0 Hardware (25%) 4.0 Virtualization and Cloud Computing (11%) 5.0 Hardware and Network Troubleshooting (29%)
Question 1: A customer called the service desk and complained that they could not reach the internet on their computer. You ask the customer to open their command prompt, type in ipconfig, and read you the IP address. The customer reads the IP as 169.254.12.45. What is the root cause of the customer’s issue based on what you know so far?
A. Their workstation cannot reach the DNS server B. Their workstation cannot reach the default gateway C. Their workstation cannot reach the web server D. Their workstation cannot reach the DHCP server
Correct Answer: D
Explanation OBJ-5.7: Since the customer’s IP address is 169.254.12.45, it is an APIPA address. Since the workstation has an APIPA address, it means the DHCP server was unreachable. Automatic Private IP Addressing (APIPA) is a feature of Windows-based operating systems that enables a computer to automatically assign itself an IP address when there is no Dynamic Host Configuration Protocol (DHCP) server available to perform that function. APIPA serves as a DHCP server failover mechanism and makes it easier to configure and support small local area networks (LANs). If no DHCP server is currently available, either because the server is temporarily down or because none exists on the network, the computer selects an IP address from a range of addresses (from 169.254.0.0 – 169.254.255.255) reserved for that purpose.
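The APIPA range check described in the explanation can be reproduced with Python's ipaddress module:

```python
import ipaddress

# 169.254.0.0/16 is the APIPA / link-local range a host self-assigns
# when no DHCP server answers.
apipa = ipaddress.ip_network("169.254.0.0/16")

client_ip = ipaddress.ip_address("169.254.12.45")
print(client_ip in apipa)        # True -> the DHCP server was unreachable
print(client_ip.is_link_local)   # True (the same check, built into the library)
```

A help desk script could use a check like this to immediately distinguish "DHCP unreachable" from DNS or gateway problems before any further troubleshooting.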
Question 2: Your company is currently using a 5 GHz wireless security system, so your boss has asked you to install a 2.4 GHz wireless network to use for the company’s computer network to prevent interference. Which of the following can NOT be installed to provide a 2.4 GHz wireless network?
A. 802.11g B. 802.11b C. 802.11ac D. 802.11n
Correct Answer: C
Explanation OBJ-2.3: Wireless networks are configured to use either 2.4 GHz or 5.0 GHz frequencies, depending on the network type. 802.11a and 802.11ac both utilize a 5.0 GHz frequency for their communications. 802.11b and 802.11g both utilize a 2.4 GHz frequency for their communications. 802.11n and 802.11ax utilize either 2.4 GHz, 5.0 GHz, or both, depending on the Wi-Fi device's manufacturer. The 802.11b (Wireless B) standard utilizes a 2.4 GHz frequency to provide wireless networking at speeds up to 11 Mbps. The 802.11g (Wireless G) standard utilizes a 2.4 GHz frequency to provide wireless networking at speeds up to 54 Mbps. The 802.11n (Wireless N) standard utilizes a 2.4 GHz frequency to provide wireless networking at speeds up to 108 Mbps or a 5.0 GHz frequency to provide wireless networking at speeds up to 600 Mbps. Wireless N supports the use of multiple-input-multiple-output (MIMO) technology to use multiple antennas to transmit and receive data at higher speeds. Wireless N supports channel bonding by combining two 20 MHz channels into a single 40 MHz channel to provide additional bandwidth. The 802.11ac (Wireless AC or Wi-Fi 5) standard utilizes a 5.0 GHz frequency to provide wireless networking at theoretical speeds up to 5.5 Gbps. Wireless AC uses channel bonding to create a single channel of up to 160 MHz to provide additional bandwidth. Wireless AC uses multi-user multiple-input-multiple-output (MU-MIMO) technology to allow multiple devices to communicate with the access point simultaneously.
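The standard-to-band mapping above can be summarized in a small lookup table; this Python sketch simply restates the explanation and confirms that 802.11ac is the only answer choice without 2.4 GHz support:

```python
# Frequency bands (in GHz) supported by each Wi-Fi standard, as described above.
WIFI_BANDS = {
    "802.11a":  {"5"},
    "802.11b":  {"2.4"},
    "802.11g":  {"2.4"},
    "802.11n":  {"2.4", "5"},
    "802.11ac": {"5"},
    "802.11ax": {"2.4", "5"},
}

def supports_2_4ghz(standard: str) -> bool:
    """Return True if the given standard can provide a 2.4 GHz network."""
    return "2.4" in WIFI_BANDS[standard]

# Of the four answer choices, only 802.11ac cannot operate at 2.4 GHz.
choices = ("802.11g", "802.11b", "802.11ac", "802.11n")
print([s for s in choices if not supports_2_4ghz(s)])  # ['802.11ac']
```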
Question 3: Which of the following resources is used by virtual machines to communicate with other virtual machines on the same network but prevents them from communicating with resources on the internet?
A. DNS B. Virtual internal network C. Virtual external network D. Network address translation
Correct Answer: B
Explanation OBJ-4.2: Most virtual machines running on a workstation will have their own virtual internal network to communicate within the virtual environment while preventing them from communicating with the outside world. You may also configure a shared network address for the virtual machine to have the same IP address as the physical host that it is running on. This usually relies on network address translation to communicate from the virtual environment (inside) to the physical world (outside/internet). If you are communicating internally in the virtual network, there is no need for DNS or an external network.
Question 4: A technician needs to upgrade the RAM in a database server. The server's memory must maintain the highest level of data integrity. Which of the following types of RAM should the technician install?
A. ECC B. Non-Parity C. SODIMM D. VRAM
Correct Answer: A
Explanation OBJ-3.2: Error checking and correcting or error correcting code (ECC) is a type of system memory that has built-in error correction security. ECC is more expensive than normal memory and requires support from the motherboard. ECC is commonly used in production servers and not in standard desktops or laptops. Non-parity memory is a type of system memory that does not perform error checking except when conducting the initial startup memory count. VRAM (video RAM) refers to any type of random access memory (RAM) specifically used to store image data for a computer display. A small outline dual inline memory module (SODIMM) can be purchased in various types and sizes to fit any laptop, router, or other small form factor computing device.
Question 5: You just replaced a failed motherboard in a corporate workstation and returned it to service. About an hour later, the customer complained that the workstation is randomly shutting down and rebooting itself. You suspect the memory module may be corrupt, and you perform a memory test, but the memory passes all of your tests. Which of the following should you attempt NEXT in troubleshooting this problem?
A. Remove and reseat the RAM B. Verify the case fans are clean and properly connected C. Reset the BIOS D. Replace the RAM with ECC modules
Correct Answer: B
Explanation OBJ-5.2: If a workstation overheats, it will shut down or reboot itself to protect the processor. This can occur if the case fans are clogged with dust or become unplugged. By checking and reconnecting the case fans, the technician can rule out an overheating issue causing this problem. Since the memory was already tested successfully, it does not need to be removed and reseated, or replaced with ECC modules. The BIOS is not the issue since the computer booted up into Windows successfully before rebooting.
Introduction
The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role. The exam validates a candidate's ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.
The exam also validates a candidate's ability to complete the following tasks:
• Design solutions that incorporate AWS services to meet current business requirements and future projected needs
• Design architectures that are secure, resilient, high-performing, and cost-optimized
• Review existing solutions and determine improvements
Target candidate description
The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services.
For a detailed list of specific tools and technologies that might be covered on the exam, as well as lists of in-scope and out-of-scope AWS services, refer to the Appendix.
Exam content
Response types
There are two types of questions on the exam:
• Multiple choice: Has one correct response and three incorrect responses (distractors)
• Multiple response: Has two or more correct responses out of five or more response options
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing.
The exam includes 50 questions that will affect your score.
Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Exam results
The AWS Certified Solutions Architect – Associate exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720. Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance
at each section level. This information provides general feedback about your
exam performance. The exam uses a compensatory scoring model, which means that
you do not need to achieve a passing score in each section. You need to pass
only the overall exam.
Each section of the exam has a specific weighting, so some sections have more
questions than other sections have. The table contains general information that
highlights your strengths and weaknesses. Use caution when interpreting
section-level feedback. Candidates who pass the exam will not receive this
additional information.
Content outline
This exam guide includes weightings, test domains, and task statements for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the task statements is available to help guide your preparation for the exam. The following list gives the main content domains and their weightings; the complete exam content outline, which includes the additional context, follows. The percentage in each domain represents only scored content.
• Domain 1: Design Secure Architectures (30%)
• Domain 2: Design Resilient Architectures (26%)
• Domain 3: Design High-Performing Architectures (24%)
• Domain 4: Design Cost-Optimized Architectures (20%)
Domain 1: Design Secure Architectures
Task Statement 1: Design secure access to AWS resources.
Knowledge of:
• Access controls and management across multiple accounts
• AWS federated access and identity services (for example, AWS Identity and
Access Management [IAM], AWS Single Sign-On [AWS SSO])
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS security best practices (for example, the principle of least privilege)
• The AWS shared responsibility model
Skills in:
• Applying AWS security best practices to IAM users and root users (for
example, multi-factor authentication [MFA])
• Designing a flexible authorization model that includes IAM users, groups,
roles, and policies
• Designing a role-based access control strategy (for example, AWS Security
Token Service [AWS STS], role switching, cross-account access)
• Designing a security strategy for multiple AWS accounts (for example, AWS
Control Tower, service control policies [SCPs])
• Determining the appropriate use of resource policies for AWS services
• Determining when to federate a directory service with IAM roles
Task Statement 2: Design secure workloads and applications.
Knowledge of:
• Application configuration and credentials security
• AWS service endpoints
• Control ports, protocols, and network traffic on AWS
• Secure application access
• Security services with appropriate use cases (for example, Amazon Cognito,
Amazon GuardDuty, Amazon Macie)
• Threat vectors external to AWS (for example, DDoS, SQL injection)
Skills in:
• Designing VPC architectures with security components (for example,
security groups, route tables, network ACLs, NAT gateways)
• Determining network segmentation strategies (for example, using public subnets
and private subnets)
• Integrating AWS services to secure applications (for example, AWS Shield, AWS
WAF, AWS SSO, AWS Secrets Manager)
• Securing external network connections to and from the AWS Cloud (for example,
VPN, AWS Direct Connect)
Task Statement 3: Determine appropriate data security controls.
Knowledge of:
• Data access and governance
• Data recovery
• Data retention and classification
• Encryption and appropriate key management
Skills in:
• Aligning AWS technologies to meet compliance requirements
• Encrypting data at rest (for example, AWS Key Management Service [AWS KMS])
• Encrypting data in transit (for example, AWS Certificate Manager [ACM] using
TLS)
• Implementing access policies for encryption keys
• Implementing data backups and replications
• Implementing policies for data access, lifecycle, and protection
• Rotating encryption keys and renewing certificates
Domain 2: Design Resilient Architectures
Task Statement 1: Design scalable and loosely coupled architectures.
Knowledge of:
• API creation and management (for example, Amazon API Gateway, REST API)
• AWS managed services with appropriate use cases (for example, AWS Transfer
Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager)
• Caching strategies
• Design principles for microservices (for example, stateless workloads compared
with stateful workloads)
• Event-driven architectures
• Horizontal scaling and vertical scaling
• How to appropriately use edge accelerators (for example, content delivery
network [CDN])
• How to migrate applications into containers
• Load balancing concepts (for example, Application Load Balancer)
• Multi-tier architectures
• Queuing and messaging concepts (for example, publish/subscribe)
• Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda)
• Storage types with associated characteristics (for example, object, file,
block)
• The orchestration of containers (for example, Amazon Elastic Container Service
[Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS])
• When to use read replicas
• Workflow orchestration (for example, AWS Step Functions)
Skills in:
• Designing event-driven, microservice, and/or multi-tier architectures
based on requirements
• Determining scaling strategies for components used in an architecture design
• Determining the AWS services required to achieve loose coupling based on
requirements
• Determining when to use containers
• Determining when to use serverless technologies and patterns
• Recommending appropriate compute, storage, networking, and database
technologies based on requirements
• Using purpose-built AWS services for workloads
Task Statement 2: Design highly available and/or fault-tolerant architectures.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions,
Amazon Route 53)
• AWS managed services with appropriate use cases (for example, Amazon
Comprehend, Amazon Polly)
• Basic networking concepts (for example, route tables)
• Disaster recovery (DR) strategies (for example, backup and restore, pilot
light, warm standby, active-active failover, recovery point objective [RPO],
recovery time objective [RTO])
• Distributed design patterns
• Failover strategies
• Immutable infrastructure
• Load balancing concepts (for example, Application Load Balancer)
• Proxy concepts (for example, Amazon RDS Proxy)
• Service quotas and throttling (for example, how to configure the service
quotas for a workload in a standby environment)
• Storage options and characteristics (for example, durability, replication)
• Workload visibility (for example, AWS X-Ray)
Skills in:
• Determining automation strategies to ensure infrastructure integrity
• Determining the AWS services required to provide a highly available and/or
fault-tolerant architecture across AWS Regions or Availability Zones
• Identifying metrics based on business requirements to deliver a highly
available solution
• Implementing designs to mitigate single points of failure
• Implementing strategies to ensure the durability and availability of data (for
example, backups)
• Selecting an appropriate DR strategy to meet business requirements
• Using AWS services that improve the reliability of legacy applications and
applications not built for the cloud (for example, when application changes are
not possible)
• Using purpose-built AWS services for workloads
Domain 3: Design High-Performing Architectures
Task Statement 1: Determine high-performing and/or scalable storage solutions.
Knowledge of:
• Hybrid storage solutions to meet business requirements
• Storage services with appropriate use cases (for example, Amazon S3, Amazon
Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS])
• Storage types with associated characteristics (for example, object, file,
block)
Skills in:
• Determining storage services and configurations that meet performance
demands
• Determining storage services that can scale to accommodate future needs
Task Statement 2: Design high-performing and elastic compute solutions.
Knowledge of:
• AWS compute services with appropriate use cases (for example, AWS Batch,
Amazon EMR, Fargate)
• Distributed computing concepts supported by AWS global infrastructure and edge
services
• Queuing and messaging concepts (for example, publish/subscribe)
• Scalability capabilities with appropriate use cases (for example, Amazon EC2
Auto Scaling, AWS Auto Scaling)
• Serverless technologies and patterns (for example, Lambda, Fargate)
• The orchestration of containers (for example, Amazon ECS, Amazon EKS)
Skills in:
• Decoupling workloads so that components can scale independently
• Identifying metrics and conditions to perform scaling actions
• Selecting the appropriate compute options and features (for example, EC2
instance types) to meet business requirements
• Selecting the appropriate resource type and size (for example, the amount of
Lambda memory) to meet business requirements
Task Statement 3: Determine high-performing database solutions.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• Caching strategies and services (for example, Amazon ElastiCache)
• Data access patterns (for example, read-intensive compared with
write-intensive)
• Database capacity planning (for example, capacity units, instance types,
Provisioned IOPS)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous
migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, serverless, relational compared with
non-relational, in-memory)
Skills in:
• Configuring read replicas to meet business requirements
• Designing database architectures
• Determining an appropriate database engine (for example, MySQL compared with
PostgreSQL)
• Determining an appropriate database type (for example, Amazon Aurora, Amazon
DynamoDB)
• Integrating caching to meet business requirements
Task Statement 4: Determine high-performing and/or scalable network
architectures.
Knowledge of:
• Edge networking services with appropriate use cases (for example, Amazon
CloudFront, AWS Global Accelerator)
• How to design network architecture (for example, subnet tiers, routing, IP
addressing)
• Load balancing concepts (for example, Application Load Balancer)
• Network connection options (for example, AWS VPN, Direct Connect, AWS
PrivateLink)
Skills in:
• Creating a network topology for various architectures (for example,
global, hybrid, multi-tier)
• Determining network configurations that can scale to accommodate future needs
• Determining the appropriate placement of resources to meet business
requirements
• Selecting the appropriate load balancing strategy
Task Statement 5: Determine high-performing data ingestion and transformation
solutions.
Knowledge of:
• Data analytics and visualization services with appropriate use cases (for
example, Amazon Athena, AWS Lake Formation, Amazon QuickSight)
• Data ingestion patterns (for example, frequency)
• Data transfer services with appropriate use cases (for example, AWS DataSync,
AWS Storage Gateway)
• Data transformation services with appropriate use cases (for example, AWS
Glue)
• Secure access to ingestion access points
• Sizes and speeds needed to meet business requirements
• Streaming data services with appropriate use cases (for example, Amazon
Kinesis)
Skills in:
• Building and securing data lakes
• Designing data streaming architectures
• Designing data transfer solutions
• Implementing visualization strategies
• Selecting appropriate compute options for data processing (for example, Amazon
EMR)
• Selecting appropriate configurations for ingestion
• Transforming data between formats (for example, .csv to .parquet)
Domain 4: Design Cost-Optimized Architectures
Task Statement 1: Design cost-optimized storage solutions.
Knowledge of:
• Access options (for example, an S3 bucket with Requester Pays object
storage)
• AWS cost management service features (for example, cost allocation tags,
multi-account billing)
• AWS cost management tools with appropriate use cases (for example, AWS Cost
Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS storage services with appropriate use cases (for example, Amazon FSx,
Amazon EFS, Amazon S3, Amazon EBS)
• Backup strategies
• Block storage options (for example, hard disk drive [HDD] volume types, solid
state drive [SSD] volume types)
• Data lifecycles
• Hybrid storage options (for example, DataSync, Transfer Family, Storage
Gateway)
• Storage access patterns
• Storage tiering (for example, cold tiering for object storage)
• Storage types with associated characteristics (for example, object, file,
block)
Skills in:
• Designing appropriate storage strategies (for example, batch uploads to Amazon
S3 compared with individual uploads)
• Determining the correct storage size for a workload
• Determining the lowest cost method of transferring data for a workload to AWS
storage
• Determining when storage auto scaling is required
• Managing S3 object lifecycles
• Selecting the appropriate backup and/or archival solution
• Selecting the appropriate service for data migration to storage services
• Selecting the appropriate storage tier
• Selecting the correct data lifecycle for storage
• Selecting the most cost-effective storage service for a workload
Task Statement 2: Design cost-optimized compute solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags,
multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost
Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS purchasing options (for example, Spot Instances, Reserved Instances,
Savings Plans)
• Distributed compute strategies (for example, edge processing)
• Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge)
• Instance types, families, and sizes (for example, memory optimized, compute
optimized, virtualization)
• Optimization of compute utilization (for example, containers, serverless
computing, microservices)
• Scaling strategies (for example, auto scaling, hibernation)
Skills in:
• Determining an appropriate load balancing strategy (for example,
Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer
4] compared with Gateway Load Balancer)
• Determining appropriate scaling methods and strategies for elastic workloads
(for example, horizontal compared with vertical, EC2 hibernation)
• Determining cost-effective AWS compute services with appropriate use cases
(for example, Lambda, Amazon EC2, Fargate)
• Determining the required availability for different classes of workloads (for
example, production workloads, non-production workloads)
• Selecting the appropriate instance family for a workload
• Selecting the appropriate instance size for a workload
Task Statement 3: Design cost-optimized database solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags,
multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost
Explorer, AWS Budgets, AWS Cost and Usage Report)
• Caching strategies
• Data retention policies
• Database capacity planning (for example, capacity units)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous
migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, relational compared with
non-relational, Aurora, DynamoDB)
Skills in:
• Designing appropriate backup and retention policies (for example, snapshot
frequency)
• Determining an appropriate database engine (for example, MySQL compared with
PostgreSQL)
• Determining cost-effective AWS database services with appropriate use cases
(for example, DynamoDB compared with Amazon RDS, serverless)
• Determining cost-effective AWS database types (for example, time series
format, columnar format)
• Migrating database schemas and data to different locations and/or different
database engines
Task Statement 4: Design cost-optimized network architectures.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags,
multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost
Explorer, AWS Budgets, AWS Cost and Usage Report)
• Load balancing concepts (for example, Application Load Balancer)
• NAT gateways (for example, NAT instance costs compared with NAT gateway costs)
• Network connectivity (for example, private lines, dedicated lines, VPNs)
• Network routing, topology, and peering (for example, AWS Transit Gateway, VPC
peering)
• Network services with appropriate use cases (for example, DNS)
Skills in:
• Configuring appropriate NAT gateway types for a network (for example, a
single shared NAT gateway compared with NAT gateways for each Availability Zone)
• Configuring appropriate network connections (for example, Direct Connect
compared with VPN compared with internet)
• Configuring appropriate network routes to minimize network transfer costs (for
example, Region to Region, Availability Zone to Availability Zone, private to
public, Global Accelerator, VPC endpoints)
• Determining strategic needs for content delivery networks (CDNs) and edge
caching
• Reviewing existing workloads for network optimizations
• Selecting an appropriate throttling strategy
• Selecting the appropriate bandwidth allocation for a network device (for
example, a single VPN compared with multiple VPNs, Direct Connect speed)
Question 1:
A software development company is using serverless computing with AWS Lambda to build and run applications without having to set up or manage servers. They have a Lambda function that connects to MongoDB Atlas, a popular Database as a Service (DBaaS) platform, and also uses a third-party API to fetch certain data for their application. One of the developers was instructed to create the environment variables for the MongoDB database hostname, username, and password, as well as the API credentials that will be used by the Lambda function, for the DEV, SIT, UAT, and PROD environments.
Considering that the Lambda function is storing sensitive database and API
credentials, how can this information be secured to prevent other developers in
the team, or anyone, from seeing these credentials in plain text? Select the
best option that provides maximum security.
A. Enable SSL encryption that leverages on AWS CloudHSM to store and encrypt the
sensitive information.
B. AWS Lambda does not provide encryption for the environment variables. Deploy
your code to an EC2 instance instead.
C. There is no need to do anything because, by default, AWS Lambda already
encrypts the environment variables using the AWS Key Management Service.
D. Create a new KMS key and use it to enable encryption helpers that leverage on
AWS Key Management Service to store and encrypt the sensitive information.
Correct Answer: D
Explanation
When you create or update Lambda functions that use environment variables, AWS
Lambda encrypts them using the AWS Key Management Service. When your Lambda
function is invoked, those values are decrypted and made available to the Lambda
code.
The first time you create or update Lambda functions that use environment variables in a region, a default service key is created for you automatically within AWS KMS. This key is used to encrypt environment variables. However, if you wish to use encryption helpers to encrypt environment variables after your Lambda function is created, you must create your own AWS KMS key and choose it instead of the default key; the encryption helpers cannot use the default service key. Creating your own key also gives you more flexibility, including the ability to create, rotate, disable, and define access controls, and to audit the encryption keys used to protect your data.
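To illustrate, a minimal boto3-style sketch of choosing your own customer managed key might look like the following. The function name, account ID, key ARN, and variable values are hypothetical placeholders, and the actual API call is left commented out so the sketch stays self-contained:

```python
# Hypothetical identifiers (replace with your own function and KMS key).
FUNCTION_NAME = "fetch-user-data-dev"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"

# Parameters for Lambda's UpdateFunctionConfiguration API: supplying
# KMSKeyArn tells Lambda to encrypt the environment variables with your
# own customer managed key instead of the default service key.
params = {
    "FunctionName": FUNCTION_NAME,
    "KMSKeyArn": KMS_KEY_ARN,
    "Environment": {
        "Variables": {
            "DB_HOSTNAME": "cluster0.example.mongodb.net",  # placeholder
            "DB_USERNAME": "app_user",                      # placeholder
            # Secret values would be entered already encrypted via the
            # console's encryption helpers, not stored here in plain text.
        }
    },
}

# import boto3
# boto3.client("lambda").update_function_configuration(**params)
print(sorted(params))  # ['Environment', 'FunctionName', 'KMSKeyArn']
```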
Question 2: A company hosted an e-commerce website on an Auto Scaling group of EC2
instances behind an Application Load Balancer. The Solutions Architect noticed
that the website is receiving a large number of illegitimate external requests
from multiple systems with IP addresses that constantly change. To resolve the
performance issues, the Solutions Architect must implement a solution that would
block the illegitimate requests with minimal impact on legitimate traffic.
Which of the following options fulfills this requirement?
A. Create a regular rule in AWS WAF and associate the web ACL to an Application
Load Balancer.
B. Create a rate-based rule in AWS WAF and associate the web ACL to an
Application Load Balancer.
C. Create a custom rule in the security group of the Application Load Balancer
to block the offending requests.
D. Create a custom network ACL and associate it with the subnet of the
Application Load Balancer to block the offending requests.
Correct Answer: B
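A rate-based rule counts requests per source IP over a rolling time window and blocks IPs that exceed the limit, which suits constantly changing illegitimate IPs with minimal impact on legitimate traffic. A hedged sketch of the shape of such a WAF rule (the rule name and request limit are hypothetical) follows:

```python
# Sketch of a WAF rate-based rule. The rule blocks any source IP that
# exceeds the request limit within the evaluation window, while normal
# users stay well under the threshold and are unaffected.
rate_rule = {
    "Name": "block-flood",           # hypothetical rule name
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 1000,           # hypothetical requests-per-window limit
            "AggregateKeyType": "IP" # count requests per source IP
        },
    },
    "Action": {"Block": {}},         # block offenders, allow everyone else
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "block-flood",
    },
}
print(rate_rule["Statement"]["RateBasedStatement"]["AggregateKeyType"])  # IP
```

The web ACL containing this rule is then associated with the Application Load Balancer, as option B describes.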
Question 4: There was an incident in your production environment where the user data stored in an S3 bucket was accidentally deleted by one of the junior DevOps engineers. The issue was escalated to your manager, and after a few days you were instructed to improve the security and protection of your AWS resources.
What combination of the following options will protect the S3 objects in your
bucket from both accidental deletion and overwriting? (Select TWO.)
A. Enable Versioning
B. Enable Amazon S3 Intelligent-Tiering
C. Provide access to S3 data strictly through pre-signed URL only
D. Enable Multi-Factor Authentication Delete
E. Disallow S3 Delete using an IAM bucket policy
Correct Answer: A,D
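Versioning and MFA Delete are both enabled through a single bucket-versioning call; versioning keeps every overwrite as a recoverable object version, and MFA Delete requires an MFA code before a version can be permanently deleted. A hedged sketch of the request (bucket name and MFA serial are placeholders, and the actual call is commented out):

```python
# Hypothetical bucket and MFA device serial plus current token code.
BUCKET = "prod-user-data"
MFA = "arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456"

# Request body for S3's PutBucketVersioning API.
versioning_request = {
    "Bucket": BUCKET,
    "VersioningConfiguration": {
        "Status": "Enabled",     # keep every overwrite as a new version
        "MFADelete": "Enabled",  # require MFA to permanently delete a version
    },
    "MFA": MFA,  # MFA Delete can only be enabled by the root user with MFA
}

# import boto3
# boto3.client("s3").put_bucket_versioning(**versioning_request)
print(versioning_request["VersioningConfiguration"])
```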
Question 5: A popular social media website uses a CloudFront web distribution to serve its static content to millions of users around the globe. They have recently received a number of complaints that users take a long time to log into the website, and on some occasions users get HTTP 504 errors. You are instructed by your manager to significantly reduce the users' login time to further optimize the system.
Which of the following options should you use together to set up a
cost-effective solution that can improve your application’s performance? (Select
TWO.)
A. Customize the content that the CloudFront web distribution delivers to your
users using Lambda@Edge, which allows your Lambda functions to execute the
authentication process in AWS locations closer to the users.
B. Deploy your application to multiple AWS regions to accommodate your users
around the world. Set up a Route 53 record with latency routing policy to route
incoming traffic to the region that provides the best latency to the user.
C. Configure your origin to add a Cache-Control max-age directive to your
objects, and specify the longest practical value for max-age to increase the
cache hit ratio of your CloudFront distribution.
D. Set up an origin failover by creating an origin group with two origins.
Specify one as the primary origin and the other as the second origin which
CloudFront automatically switches to when the primary origin returns specific
HTTP status code failure responses.
E. Use multiple and geographically disperse VPCs to various AWS regions then
create a transit VPC to connect all of your resources. In order to handle the
requests faster, set up Lambda functions in each region using the AWS Serverless
Application Model (SAM) service.
Correct Answer: A,D
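The origin failover in option D corresponds to the OriginGroups element of a CloudFront distribution configuration; a hedged sketch of its shape (origin IDs and status codes are chosen for illustration) follows:

```python
# Sketch of a CloudFront origin group for failover. CloudFront retries the
# secondary origin when the primary returns one of the listed HTTP status
# codes, such as the 504 errors the users were seeing.
origin_group = {
    "Id": "primary-with-failover",  # hypothetical group ID
    "FailoverCriteria": {
        "StatusCodes": {"Quantity": 3, "Items": [500, 502, 504]},
    },
    "Members": {
        "Quantity": 2,
        "Items": [
            {"OriginId": "primary-origin"},    # served first
            {"OriginId": "secondary-origin"},  # used on failover
        ],
    },
}
print(origin_group["Members"]["Quantity"])  # 2
```

In the distribution's cache behavior, the origin group ID is then referenced as the target origin instead of a single origin.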
Question 6: A company is using Amazon S3 to store frequently accessed data. When an
object is created or deleted, the S3 bucket will send an event notification to
the Amazon SQS queue. A solutions architect needs to create a solution that will
notify the development and operations team about the created or deleted objects.
Which of the following would satisfy this requirement?
A. Create a new Amazon SNS FIFO topic for the other team. Grant Amazon S3
permission to send the notification to the second SNS topic.
B. Set up another Amazon SQS queue for the other team. Grant Amazon S3
permission to send a notification to the second SQS queue.
C. Set up an Amazon SNS topic and configure two Amazon SQS queues to poll the
SNS topic. Grant Amazon S3 permission to send notifications to Amazon SNS and
update the bucket to use the new SNS topic.
D. Create an Amazon SNS topic and configure two Amazon SQS queues to subscribe
to the topic. Grant Amazon S3 permission to send notifications to Amazon SNS and
update the bucket to use the new SNS topic.
Correct Answer: D
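The fan-out in option D can be sketched as an S3 notification configuration that publishes to one SNS topic, plus two SQS queue subscriptions (one per team); the ARNs and queue names below are hypothetical:

```python
# S3 publishes created/deleted object events to a single SNS topic...
notification_config = {
    "TopicConfigurations": [
        {
            "TopicArn": "arn:aws:sns:us-east-1:111122223333:s3-object-events",
            "Events": ["s3:ObjectCreated:*", "s3:ObjectRemoved:*"],
        }
    ]
}

# ...and each team's SQS queue subscribes to that topic, so both teams
# receive a copy of every notification (classic SNS-to-SQS fan-out).
subscriptions = [
    {"Protocol": "sqs",
     "Endpoint": "arn:aws:sqs:us-east-1:111122223333:dev-team-queue"},
    {"Protocol": "sqs",
     "Endpoint": "arn:aws:sqs:us-east-1:111122223333:ops-team-queue"},
]

# boto3 calls, commented out to keep the sketch self-contained:
# s3.put_bucket_notification_configuration(
#     Bucket="my-bucket", NotificationConfiguration=notification_config)
# for sub in subscriptions:
#     sns.subscribe(TopicArn=..., **sub)
print(len(subscriptions))  # 2
```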
Appendix
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could
appear on the exam. This list is subject to change and is provided to help you
understand the general scope of services, features, or technologies on the exam.
The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of
these technologies will likely be covered more than others on the exam, the
order and placement of them in this list is no indication of relative weight or
importance:
• Compute
• Cost management
• Database
• Disaster recovery
• High performance
• Management and governance
• Microservices and component decoupling
• Migration and data transfer
• Networking, connectivity, and content delivery
• Resiliency
• Security
• Serverless and event-driven design principles
• Storage