Category Archives: Amazon

Amazon AWS Certified DevOps Engineer Professional Exam

The AWS Certified DevOps Engineer – Professional exam is intended for individuals who perform a DevOps engineer role with two or more years of experience provisioning, operating, and managing AWS environments.

Abilities Validated by the Certification
Implement and manage continuous delivery systems and methodologies on AWS
Implement and automate security controls, governance processes, and compliance validation
Define and deploy monitoring, metrics, and logging systems on AWS
Implement systems that are highly available, scalable, and self-healing on the AWS platform
Design, manage, and maintain tools to automate operational processes

Recommended Knowledge and Experience
Experience developing code in at least one high-level programming language
Experience building highly automated infrastructures
Experience administering operating systems
Understanding of modern development and operations processes and methodologies

Prepare for Your Exam
There is no better preparation than hands-on experience. There are many relevant AWS Training courses and other resources to assist you with acquiring additional knowledge and skills to prepare for certification. Please review the exam guide for information about the competencies assessed on the certification exam.

Introduction
The AWS Certified DevOps Engineer – Professional (DOP-C01) exam validates technical expertise in provisioning, operating, and managing distributed application systems on the AWS platform. It is intended for individuals who perform a DevOps Engineer role.

It validates an examinee’s ability to:
 Implement and manage continuous delivery systems and methodologies on AWS
 Implement and automate security controls, governance processes, and compliance validation
 Define and deploy monitoring, metrics, and logging systems on AWS
 Implement systems that are highly available, scalable, and self-healing on the AWS platform
 Design, manage, and maintain tools to automate operational processes

Recommended AWS Knowledge
 Two or more years’ experience provisioning, operating, and managing AWS environments
 Experience developing code in at least one high-level programming language
 Experience building highly automated infrastructures
 Experience administering operating systems
 Understanding of modern development and operations processes and methodologies

Exam Preparation
These training courses and materials may be helpful for examination preparation:

AWS Training (aws.amazon.com/training)
 DevOps Engineering on AWS – https://aws.amazon.com/training/course-descriptions/devops-engineering/

There are two types of questions on the examination:
 Multiple-choice: Has one correct response option and three incorrect responses (distractors).
 Multiple-response: Has two or more correct responses out of five or more options.

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective.
Unanswered questions are scored as incorrect; there is no penalty for guessing.

Unscored Content
Your examination may include unscored items that are placed on the test to gather statistical information. These items are not identified on the form and do not affect your score.

Exam Results
The AWS Certified DevOps Engineer – Professional (DOP-C01) is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals who are guided by certification industry best practices and guidelines.

Your results for the examination are reported as a score from 100–1,000, with a minimum passing score of 750. Your score shows how you performed on the examination as a whole and whether or not you passed. Scaled scoring models are used to equate scores across multiple exam forms that may have slightly different difficulty levels.
Your score report contains a table of classifications of your performance at each section level. This information is designed to provide general feedback concerning your examination performance. The examination uses a compensatory scoring model, which means that you do not need to “pass” the individual sections, only the overall examination. Each section of the examination has a specific weighting, so some sections have more questions than others. The table contains general information, highlighting your strengths and weaknesses. Exercise caution when interpreting section-level feedback.

Content Outline
This exam guide includes weightings, test domains, and objectives only. It is not a comprehensive listing of the content on this examination. The table below lists the main content domains and their weightings.

Domain 1: SDLC Automation 22%
Domain 2: Configuration Management and Infrastructure as Code 19%
Domain 3: Monitoring and Logging 15%
Domain 4: Policies and Standards Automation 10%
Domain 5: Incident and Event Response 18%
Domain 6: High Availability, Fault Tolerance, and Disaster Recovery 16%

Domain 1: SDLC Automation
1.1 Apply concepts required to automate a CI/CD pipeline
1.2 Determine source control strategies and how to implement them
1.3 Apply concepts required to automate and integrate testing
1.4 Apply concepts required to build and manage artifacts securely
1.5 Determine deployment/delivery strategies (e.g., A/B, Blue/green, Canary, Red/black) and how to implement them using AWS Services

Domain 2: Configuration Management and Infrastructure as Code
2.1 Determine deployment services based on deployment needs
2.2 Determine application and infrastructure deployment models based on business needs
2.3 Apply security concepts in the automation of resource provisioning
2.4 Determine how to implement lifecycle hooks on a deployment
2.5 Apply concepts required to manage systems using AWS configuration management tools and services

Domain 3: Monitoring and Logging
3.1 Determine how to set up the aggregation, storage, and analysis of logs and metrics
3.2 Apply concepts required to automate monitoring and event management of an environment
3.3 Apply concepts required to audit, log, and monitor operating systems, infrastructures, and applications
3.4 Determine how to implement tagging and other metadata strategies

Domain 4: Policies and Standards Automation
4.1 Apply concepts required to enforce standards for logging, metrics, monitoring, testing, and security
4.2 Determine how to optimize cost through automation
4.3 Apply concepts required to implement governance strategies

Domain 5: Incident and Event Response
5.1 Troubleshoot issues and determine how to restore operations
5.2 Determine how to automate event management and alerting
5.3 Apply concepts required to implement automated healing
5.4 Apply concepts required to set up event-driven automated actions

Domain 6: High Availability, Fault Tolerance, and Disaster Recovery
6.1 Determine appropriate use of multi-AZ versus multi-region architectures
6.2 Determine how to implement high availability, scalability, and fault tolerance
6.3 Determine the right services based on business needs (e.g., RTO/RPO, cost)
6.4 Determine how to design and automate disaster recovery strategies
6.5 Evaluate a deployment for points of failure

QUESTION: 1
You have an application which consists of EC2 instances in an Auto Scaling group. During a
particular time frame every day there is an increase in traffic to your website, and users are
complaining of poor response times from the application. You have configured your Auto Scaling group
to deploy one new EC2 instance when CPU utilization is greater than 60% for 2 consecutive periods of 5 minutes.
What is the least cost-effective way to resolve this problem?

A. Decrease the consecutive number of collection periods
B. Increase the minimum number of instances in the Auto Scaling group
C. Decrease the collection period to ten minutes
D. Decrease the threshold CPU utilization percentage at which to deploy a new instance

Answer: B
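The alarm and scaling policy described in the question can be sketched in CloudFormation roughly as follows (resource names such as WebServerGroup are placeholders, not from the question):

```yaml
# Sketch: scale out by one instance when average CPU exceeds 60%
# for 2 consecutive 5-minute periods.
ScaleOutPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref WebServerGroup   # placeholder ASG
    AdjustmentType: ChangeInCapacity
    ScalingAdjustment: 1
CPUHighAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    Namespace: AWS/EC2
    MetricName: CPUUtilization
    Statistic: Average
    Period: 300           # 5-minute collection period
    EvaluationPeriods: 2  # 2 consecutive periods
    Threshold: 60
    ComparisonOperator: GreaterThanThreshold
    Dimensions:
      - Name: AutoScalingGroupName
        Value: !Ref WebServerGroup
    AlarmActions:
      - !Ref ScaleOutPolicy
```

Raising the group's minimum size (option B) keeps extra instances running around the clock rather than only during the busy window, which is why it is the least cost-effective fix.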

QUESTION: 2
You have decided that you need to change the instance type of your production instances, which are
running as part of an Auto Scaling group. The entire architecture is deployed using a CloudFormation
template. You currently have 4 instances in production. You cannot have any interruption in service
and need to ensure 2 instances are always running during the update. Which of the options listed
below can be used for this?

A. AutoScalingRollingUpdate
B. AutoScalingScheduledAction
C. AutoScalingReplacingUpdate
D. AutoScalingIntegrationUpdate

Answer: A
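An AutoScalingRollingUpdate policy for this scenario might look like the following sketch (LaunchConfig and other names are placeholders):

```yaml
# Sketch: rolling update that keeps at least 2 of the 4 instances in
# service while instances are replaced with the new instance type.
WebServerGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: 2   # 2 instances always running during the update
      MaxBatchSize: 1            # replace one instance at a time
      PauseTime: PT5M            # wait 5 minutes between batches
  Properties:
    MinSize: "2"
    MaxSize: "4"
    DesiredCapacity: "4"
    LaunchConfigurationName: !Ref LaunchConfig  # placeholder launch configuration
    AvailabilityZones: !GetAZs ''
```

CloudFormation then terminates and replaces instances in batches, waiting for each batch before starting the next, so service is never reduced below two instances.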

QUESTION: 3
You currently have the following setup in AWS
1) An Elastic Load Balancer
2) Auto Scaling Group which launches EC2 Instances
3) AMIs with your code pre-installed
You want to deploy the updates of your app to only a certain number of users. You want to have a
cost-effective solution. You should also be able to roll back quickly. Which of the solutions below is the most feasible one?

A. Create a second ELB, and a new Auto Scaling Group assigned a new Launch Configuration. Create a
new AMI with the updated app. Use Route53 Weighted Round Robin records to adjust the proportion of traffic hitting the two ELBs.
B. Create new AMIs with the new app. Then use the new EC2 instances in half proportion to the older instances.
C. Redeploy with AWS Elastic Beanstalk and Elastic Beanstalk versions. Use Route 53 Weighted Round
Robin records to adjust the proportion of traffic hitting the two ELBs
D. Create a full second stack of instances, cut the DNS over to the new stack of instances, and change the DNS back if a rollback is needed.

Answer: A
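The weighted Route 53 records from answer A can be sketched like this (zone name, record names, ELB DNS names, and weights are all illustrative placeholders):

```yaml
# Sketch: send ~10% of traffic to the ELB fronting the updated app.
# Setting the new record's Weight to 0 rolls all traffic back quickly.
AppDNS:
  Type: AWS::Route53::RecordSetGroup
  Properties:
    HostedZoneName: example.com.        # placeholder hosted zone
    RecordSets:
      - Name: www.example.com.
        Type: CNAME
        TTL: "60"
        SetIdentifier: current-version
        Weight: 90                      # ~90% of traffic to the old stack
        ResourceRecords:
          - old-elb.us-east-1.elb.amazonaws.com   # placeholder ELB DNS name
      - Name: www.example.com.
        Type: CNAME
        TTL: "60"
        SetIdentifier: new-version
        Weight: 10                      # ~10% of traffic to the new stack
        ResourceRecords:
          - new-elb.us-east-1.elb.amazonaws.com   # placeholder ELB DNS name
```

A short TTL matters here: clients re-resolve quickly, so adjusting the weights takes effect within minutes.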

QUESTION: 4
Your application is currently running on Amazon EC2 instances behind a load balancer. Your
management has decided to use a Blue/Green deployment strategy. How should you implement this for each deployment?

A. Set up Amazon Route 53 health checks to fail over from any Amazon EC2 instance that is currently being deployed to.
B. Using AWS CloudFormation, create a test stack for validating the code, and then deploy the code
to each production Amazon EC2 instance.
C. Create a new load balancer with new Amazon EC2 instances, carry out the deployment, and then
switch DNS over to the new load balancer using Amazon Route 53 after testing.
D. Launch more Amazon EC2 instances to ensure high availability, de-register each Amazon EC2
instance from the load balancer, upgrade it, and test it, and then register it again with the load balancer.

Answer: C

QUESTION: 5
You have an application running a specific process that is critical to the application’s functionality,
and have added the health check process to your Auto Scaling Group. The instances are showing
healthy but the application itself is not working as it should. What could be the issue with the health check, since it is still showing the instances as healthy?

A. You do not have the time range in the health check properly configured
B. It is not possible for a health check to monitor a process that involves the application
C. The health check is not configured properly
D. The health check is not checking the application process

Answer: D
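One common fix (a sketch with placeholder resource names) is to switch the Auto Scaling group from the default EC2 status checks, which only report on the instance itself, to the load balancer's application-level health check, which can probe an HTTP endpoint served by the critical process:

```yaml
# Sketch: use the ELB health check so instances whose application
# process is failing are marked unhealthy and replaced.
AppServerGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    HealthCheckType: ELB          # default is EC2 (instance status checks only)
    HealthCheckGracePeriod: 300   # allow time for the app to start
    LoadBalancerNames:
      - !Ref AppLoadBalancer      # placeholder Classic Load Balancer
    MinSize: "2"
    MaxSize: "4"
    LaunchConfigurationName: !Ref LaunchConfig  # placeholder
    AvailabilityZones: !GetAZs ''
```

With the EC2 health check type, an instance can pass status checks while the application process is dead, which matches the symptom in the question.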

Certkingdom Review, Certkingdom Amazon AWS Certified DevOps Engineer Professional PDF

MCTS Training, MCITP Training

Best Amazon AWS Certified DevOps Engineer Professional Certification, Amazon AWS Certified DevOps Engineer Professional Training at certkingdom.com

Amazon DBS-C01 AWS Certified Database – Specialty Exam

Earn an industry-recognized credential from AWS that validates your expertise in the breadth of AWS database services and accelerating the use of database technology to drive your organization’s business transformation. Build credibility and confidence by highlighting your ability to design, recommend, and maintain the optimal AWS database solution for a use case.

Abilities Validated by the Certification
Understand and differentiate the key features of AWS database services
Analyze needs and requirements to recommend and design appropriate database solutions using AWS services

Recommended Knowledge and Experience
At least 5 years of experience with database technologies
At least 2 years of hands-on experience working on AWS
Experience and expertise working with on-premises and AWS-Cloud-based relational and nonrelational databases

Prepare for Your Exam
There is no better preparation than hands-on experience. Review the exam guide for information about the competencies assessed on this certification exam. You can also review the sample questions for format examples or take a practice exam.

Looking for more resources to help build your database expertise? Explore options including an AWS Database Learning Path, exam readiness training, an in-depth AWS Database Ramp-Up Guide, suggested whitepapers and FAQs, and more.

Introduction
The AWS Certified Database – Specialty (DBS-C01) examination is intended for individuals who perform in a database-focused role. This exam validates an examinee’s comprehensive understanding of databases, including the concepts of design, migration, deployment, access, maintenance, automation, monitoring, security, and troubleshooting.

It validates an examinee’s ability to:
 Understand and differentiate the key features of AWS database services.
 Analyze needs and requirements to design and recommend appropriate database solutions using AWS services.

Recommended AWS Knowledge
 A minimum of 5 years of experience with common database technologies
 At least 2 years of hands-on experience working on AWS
 Experience and expertise working with on-premises and AWS Cloud-based relational and NoSQL databases

Exam Content

Response Types
There are two types of questions on the examination:
 Multiple choice: Has one correct response and three incorrect responses (distractors).
 Multiple response: Has two or more correct responses out of five or more options.

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective.
Unanswered questions are scored as incorrect; there is no penalty for guessing.

Unscored Content
Your examination may include unscored items that are placed on the test to gather statistical information. These items are not identified on the form and do not affect your score.

Exam Results
The AWS Certified Database – Specialty (DBS-C01) examination is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals who are guided by certification industry best practices and guidelines.

Your results for the examination are reported as a score from 100–1,000, with a minimum passing score of 750. Your score shows how you performed on the examination as a whole and whether or not you passed. Scaled scoring models are used to equate scores across multiple exam forms that may have slightly different difficulty levels.

Your score report contains a table of classifications of your performance at each section level. This information is designed to provide general feedback concerning your examination performance. The examination uses a compensatory scoring model, which means that you do not need to “pass” the individual sections, only the overall examination. Each section of the examination has a specific weighting, so some sections have more questions than others. The table contains general information, highlighting your strengths and weaknesses. Exercise caution when interpreting section-level feedback.

Content Outline
This exam guide includes weightings, test domains, and objectives only. It is not a comprehensive listing of the content on this examination. The table below lists the main content domains and their weightings.

Domain 1: Workload-Specific Database Design 26%
Domain 2: Deployment and Migration 20%
Domain 3: Management and Operations 18%
Domain 4: Monitoring and Troubleshooting 18%
Domain 5: Database Security 18%

TOTAL 100%

Domain 1: Workload-Specific Database Design
1.1 Select appropriate database services for specific types of data and workloads
1.2 Determine strategies for disaster recovery and high availability
1.3 Design database solutions for performance, compliance, and scalability
1.4 Compare the costs of database solutions

Domain 2: Deployment and Migration
2.1 Automate database solution deployments
2.2 Determine data preparation and migration strategies
2.3 Execute and validate data migration

Domain 3: Management and Operations
3.1 Determine maintenance tasks and processes
3.2 Determine backup and restore strategies
3.3 Manage the operational environment of a database solution

Domain 4: Monitoring and Troubleshooting
4.1 Determine monitoring and alerting strategies
4.2 Troubleshoot and resolve common database issues
4.3 Optimize database performance

Domain 5: Database Security
5.1 Encrypt data at rest and in transit
5.2 Evaluate auditing solutions
5.3 Determine access control and authentication mechanisms
5.4 Recognize potential security vulnerabilities within database solutions

QUESTION 1
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for
MySQL Multi-AZ DB instance is part of this deployment, with a
database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company’s
Database Specialist is able to log in to MySQL and run queries from the bastion host using these details.
When users try to utilize the application hosted in the AWS account, they are presented with a generic error
message. The application servers are logging a “could not connect to server: Connection times out” error
message to Amazon CloudWatch Logs.
What is the cause of this error?

A. The user name and password the application is using are incorrect.
B. The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
C. The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
D. The user name and password are correct, but the user is not authorized to use the DB instance.

Correct Answer: C
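The missing rule from answer C can be sketched in CloudFormation as follows (both security group references are placeholders):

```yaml
# Sketch: allow MySQL traffic (port 3306) into the DB instance's
# security group from the application servers' security group.
DBIngressFromAppServers:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref DatabaseSecurityGroup                 # placeholder: SG attached to the RDS instance
    IpProtocol: tcp
    FromPort: 3306
    ToPort: 3306
    SourceSecurityGroupId: !Ref AppServerSecurityGroup  # placeholder: SG attached to the app servers
```

The bastion host can connect because its own security group is presumably allowed; the application servers time out because no such ingress rule exists for them.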

QUESTION 2
An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and
recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to
reduce the chance of accidental instance data loss in the future.
Which settings will meet this requirement? (Choose three.)

A. Set DeletionProtection to True
B. Set MultiAZ to True
C. Set TerminationProtection to True
D. Set DeleteAutomatedBackups to False
E. Set DeletionPolicy to Delete
F. Set DeletionPolicy to Retain

Correct Answer: ACF
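The three protective settings from the answer appear together in a template roughly like this sketch (engine, sizing, and credential values are placeholders):

```yaml
# Sketch: reduce the chance of accidental data loss on an RDS instance.
Database:
  Type: AWS::RDS::DBInstance
  DeletionPolicy: Retain            # F: keep the instance if the stack is deleted
  Properties:
    DeletionProtection: true        # A: block delete calls against the instance
    DeleteAutomatedBackups: false   # D: keep automated backups after deletion
    Engine: mysql                   # placeholder engine
    DBInstanceClass: db.t3.medium   # placeholder sizing
    AllocatedStorage: "20"
    MasterUsername: admin           # placeholder credentials
    MasterUserPassword: change-me
```

Note that DeletionPolicy is a resource attribute (a sibling of Properties), while DeletionProtection and DeleteAutomatedBackups are instance properties.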

QUESTION 3
A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster
with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection
failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS
events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete.
What is the MOST likely cause of the 5-minute connection outage?

A. After a database crash, Aurora needed to replay the redo log from the last database checkpoint
B. The client-side application is caching the DNS data and its TTL is set too high
C. After failover, the Aurora DB cluster needs time to warm up before accepting client connections
D. There were no active Aurora Replicas in the Aurora DB cluster

Correct Answer: C

QUESTION 4
A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT
department has established an AWS Direct Connect link from the company’s data center. The company’s
Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from
being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine.
Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both
Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid and
the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also
verified that the security group configuration allows network traffic from all corporate IP addresses.
What should the Database Specialist do to correct the Data Analysts’ inability to connect?

A. Restart the DB cluster to apply the SSL change.
B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.
C. Add explicit mappings between the Data Analysts’ IP addresses and the instance in the security group assigned to the DB cluster.
D. Modify the Data Analysts’ local client firewall to allow network traffic to AWS.

Correct Answer: D


 


2015 technology industry graveyard

Cisco, Microsoft, Google and others bury outdated technologies to move ahead with new ones.

Ba-bye
The Technology Industry Graveyard is pretty darn full in 2015, and we’re not even including the near-dead such as RadioShack and Microsoft’s IE browser. Pay your respects here…

GrooveShark
The self-described “World’s Music Library” is no more after shutting down in April in the wake of serious legal pressure from music companies whose songs GrooveShark allowed to be shared but had never licensed. Apple and Google had each kicked GrooveShark out of their app stores years ago due to complaints from music labels. Far sadder than the 9-year-old company’s demise, however, was the death of co-founder Josh Greenberg in July at the age of just 28.

Typo iPhone keyboard
Not even the glamor of being co-founded by American Idol host Ryan Seacrest could help Typo Innovations save its iPhone keyboard, which BlackBerry said infringed on its patents. So instead, Typo bailed on the iPhone model and settled for selling keyboards for devices with screens 7.9 inches or larger (like iPads).

Amazon Fire Phone
With a product name like Fire, you’re just asking for colorful headlines if it bombs. And indeed, Amazon has stopped making its Fire Phone about a year after introducing it and media outlets were quick to highlight the company “extinguishing” it or remarking on the phone being “burnt out.” Amazon has had some success on the hardware front, namely with its Kindle line, but the Fire just didn’t distinguish itself and was going for free with a carrier contract by the end.

Interop New York
Interop Las Vegas carries on as one of the network industry’s top trade shows next May, but little sibling Interop New York is no more this year. The Fall show, traditionally held at the Javits Center since 2005, was always smaller and was discontinued for 2015 despite lively marketing material last year touting “More Than 30 Interop New York Exhibitors and Sponsors to Make Announcements in Anticipation of the Event.”

GTalk
Google ditched so many things in 2015 that we devoted an entire slideshow to Google’s Graveyard. So to choose just one representative item here, we remember Google Talk, which had a good run, starting up in 2005. But it’s never good when Google pulls out the term “deprecated” as it did in February in reference to this chat service’s Windows App. Google said it was pulling the plug on GTalk in part to focus on Google Hangouts in a world where people have plenty of other ways to chat online. However, Google Talk does live on via third-party apps.

Cisco Invicta storage products
Cisco has a good touch when it comes to acquisitions, but its $415 million WHIPTAIL buyout from 2013 didn’t work out. The company in July revealed it had pulled the plug on its Invicta flash storage appliances acquired via that deal. It’s not unthinkable, though, that Cisco could go after another storage company, especially in light of the Dell-EMC union.

RapidShare
The once-popular file hosting system, begun in 2002, couldn’t withstand the onslaught of competition from all sides, including Google and Dropbox. Back in 2009, the Switzerland-based operation ran one of the Internet’s 20 most visited websites, according to Wikipedia. It shut down on March 31, and users’ leftover files went away with it.

Windows RT devices
This locked-down Microsoft OS for tablets and convertible laptops fared about as well as Windows 8, after being introduced as a prototype in 2011 at the big CES event in Las Vegas. Microsoft’s software for the 32-bit ARM architecture was intended to enable devices to exploit that architecture’s power efficiency, but overall, the offering proved to be a funky fit with existing Windows software. Production of RT devices stopped earlier in 2015 as Microsoft focuses on Win10 and more professional-focused Surface devices.

OpenStack vendor Nebula
As Network World’s Brandon Butler wrote in April, Nebula became one of the first casualties of the open source OpenStack cloud computing movement when it shuttered its doors. The company, whose founder was CTO for IT at NASA before starting Nebula in 2011, suggested in its farewell letter that it was a bit ahead of its time, unable to convert its $38 million in funding and hardware/software appliances into a sustainable business.

FriendFeed
Facebook bought this social news and information feed aggregator in 2009, two years after the smaller business started, and then killed it off in April. People have moved on to other means of gathering and discovering info online, so FriendFeed died from lack of use. It did inspire the very singular website, Is FriendFeed Dead Yet, however, so its legacy lives on.

Apple Aperture
Apple put the final nails in its Aperture photo editing app in 2015, ending the professional-quality post-production app’s 10-year run at Version 3.6. In its place, Apple introduced its Photos app for users of both its OS X Mac and iOS devices.

Secret
One of the co-founders of the anonymous sharing app shared this in April: the company was shutting down and returning whatever part of its $35 million in funding was left. The company’s reality was just not going to meet up with his vision for it, said co-founder David Byttow. The company faced criticism that it, like other anonymous apps such as Yik Yak, allowed for cyberbullying.

Amazon Wallet
Amazon started the year by announcing that its Wallet app, the company’s 6-month-old attempt to get into mobile payments, was a bust. The app, which had been in beta, allowed users to store their gift, loyalty, and rewards cards, but not debit or credit cards as they can with Apple and Google mobile payment services.

Circa News app
Expired apps could easily fill an entire tech graveyard, so we won’t document all of their deaths here. But among them not making it through 2015 was Circa, which reportedly garnered some $4 million in venture funding since starting in 2012 but didn’t get enough takers for its app-y brand of journalism.

 


 

5 key takeaways from Amazon’s big cloud day

Amazon challenges Box with file share services, attempts to woo mobile app developers

Amazon Web Services continued to push the IaaS market forward today by challenging established cloud players like Box and Dropbox with the company’s own document collaboration platform and rolling out new features to its public cloud focused on supporting mobile applications.

Here are the five biggest takeaways from Amazon’s Summit in New York City today:

Amazon’s cloud targets mobile applications
Amazon launched a number of new features to optimize its cloud for hosting mobile apps. The main new product is named Cognito, and it provides shortcuts for mobile application developers. The idea is that there are a variety of core features that many mobile apps need but that do not differentiate one app from another, says AWS VP of Mobile Marco Argenti. These include the ability to save user profiles, provide support across multiple devices, and save the state of the app when a user changes devices. Cognito provides these services so that app developers don’t have to build them, allowing developers to focus on the truly differentiated features of their apps. The logon credentials integrate with Facebook, Google and Amazon usernames and passwords.

The move shows that in addition to being at the forefront of hosting startups and enterprise workloads, AWS wants to be the place to host mobile apps, too. It also shows Amazon turning into more of an application development platform as a service (PaaS) and Mobile Backend as a Service (MBaaS). Amazon isn’t alone, though. Microsoft has a robust set of tools for hosting mobile applications as well, and Time Warner Cable’s NaviSite rolled out new Enterprise Mobility Management tools this week for managing mobile workforces, a market in which VMware is also heavily invested.

Amazon launches document collaboration and file sharing business
Amazon announced Zocalo, a new file storage, sharing and synchronization platform based on its popular Simple Storage Service (S3). Think of it as Box or Google Drive, but in Amazon’s cloud and aimed at the enterprise market. Through a slick web interface, users can upload a variety of files — documents, PDFs, slides, spreadsheets and photos, among others — and synchronize them across devices that have a Zocalo client installed on them. Users can share documents and can also provide and solicit feedback.

The move puts Amazon in direct competition with some darlings of the consumer cloud marketplace, like Box and Dropbox, and puts it head to head with Google, again (the two companies compete on the IaaS cloud platform too). The move follows Amazon’s launch of Workspaces, a virtual desktop tool it debuted last year.

Amazon targets, and shows off, enterprise customers
Perhaps equally as important as the new products launched were the portions of the keynote where Amazon customers shared their experiences using the company’s platform. One perception the company is attempting to overcome is that it is focused on startups and developers, but not enterprise users. One way to get more enterprise customers is to show nervous potential customers that their peers are using your platform.

Vogels outlined how startup Airbnb – which processes 150,000 stays per night on its site – has grown from using about 400 Elastic Compute Cloud (EC2) servers a year ago to now more than 1,300. The company has a five-person IT team that manages it all in Amazon’s cloud. Siemens, which had $5.5 billion in sales last year, uses Amazon’s cloud to process HIPAA-compliant diagnostic images. Publishing company Conde Nast is selling its data center and servers because it’s moving into AWS’s cloud. It’s one thing to have enterprise customers using your cloud-based platform in some small test and development capacity; it’s another for them to be shutting down data centers in favor of using the public cloud.

Amazon eats its partners
Another new product the company launched today is CloudWatch Logs. Last year Amazon released CloudTrail, which is a stream of information customers can sign up for that reports every action taken in a user’s account. That information alone is not extraordinarily valuable because it needs to be processed in a way that makes sense. A variety of third-party AWS partners have taken that data and made applications out of it that customers can use to track their cloud usage and find unusual behavior. Today, Amazon rolled out some of those features itself.

The point here is that Amazon continues to develop features in its cloud, even when partner companies offer the same thing. AWS has done this before; it made life difficult for companies that had built cost optimization tools when it launched its own service, Trusted Advisor, that does the same thing. It can be tough being an Amazon partner; the key for these vendors is staying ahead of Amazon’s fast innovation cycle.

More than 10,000 people registered to attend Amazon Web Services’ Summit in New York today.
One of the most notable aspects of the day was the amount of interest it drew. AWS said that more than 10,000 people registered to attend the event, which included a keynote by CTO Werner Vogels and then breakout sessions throughout the afternoon. Thousands of others watched a live stream. The biggest takeaway of all is that the cloud is real, and a lot of people are interested in it.


MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at certkingdom.com

Microsoft taps partners to sell Azure and take on Amazon in cloud

Microsoft has a big channel and soon it will arm them with Azure

Microsoft distributors will soon be able to resell the company’s Azure cloud IaaS, Microsoft announced today in a blog post.

The move is welcomed by at least some Microsoft partners, who are excited about the opportunity to offer customers more services. Previously, Azure users had to buy directly from Microsoft.

Microsoft is allowing its distributors to resell Azure by expanding the company’s Open Licensing program, which allows a distributor to resell an Azure license to a customer. Resellers will distribute tokens to customers, each worth $100 in Azure credits. Doing so gives resellers the ability to manage Azure clouds for customers, as well as bundle other services on top of Azure virtual machines, storage and databases. Resellers can bundle an Office 365 app, or backup and recovery services, for example.

Customers may be more inclined to buy services like Azure through a partner they have an existing relationship with, as opposed to buying directly through an Enterprise Agreement (EA) with Microsoft.

Aidan Finn, a technical sales lead for Microsoft reseller MicroWarehouse Ltd. in Ireland, blogged about the news today, noting that it will make Azure more appealing, particularly to small and midsize businesses that may not be big enough for an EA but don’t have the in-house expertise to consume cloud services without assistance. Some partners were already buying blocks of Azure services and reselling them without the official Open License, Finn said. “The move to Open was necessary,” Finn wrote in an email. “The opportunity to resell a service product brings partners into the fold, and gives them a reason to be interested. Without a resell opportunity, Azure could have appeared like a competitor to some partners.”

The move seems like a natural one for Microsoft to make to take advantage of its large channel market, which is a differentiator for the company compared to some of its biggest rivals in Amazon and Google.

“The mechanism sounds a bit awkward, with purchases made via fixed-denomination tokens rather than direct, resource utilization-based billing, but giving its large partner channel more efficient access to its portfolio of cloud resources is a good use of an existing strength for the company,” Stephen O’Grady, an analyst at RedMonk, wrote in an email.

Microsoft announced that Open Licensing will be available starting in August.


Amazon’s biggest competitor in the cloud: Salesforce.com?

Amazon and Salesforce are each pioneering companies in cloud computing, but are they competitors?

Who is Amazon’s biggest competitor in the cloud?
The go-to answer for many may be companies like Rackspace with its OpenStack platform, perhaps Google with its Compute Engine, Microsoft Azure, VMware or one of the up-and-coming cloud computing companies like Joyent.

But Mikhail Malamud, founder of cloud consultancy startup CloudAware, says another cloud company could pose the biggest challenge to Amazon’s cloud plans: Salesforce.com.

These two companies, Amazon Web Services and Salesforce.com, are two of the leading cloud providers in their respective markets of infrastructure as a service (IaaS) for AWS and software as a service (SaaS) for Salesforce.com. But Malamud believes there is one reason why Salesforce.com could be a formidable foe for Amazon in the cloud moving forward: data.

Salesforce.com’s data stash
“Data is the kingmaker in the cloud,” says Malamud, whose firm, CloudAware, provides a platform to access AWS resources.

Salesforce.com has an enormous cache of customer data, and not just any data, but some of enterprises’ most valuable data — customer information. Salesforce.com has data about who its users’ customers are, what interactions they have with those customers, and increasingly it’s been attempting to collect even more data, from human resource management to social data.

And Salesforce is building an ecosystem of products and services around that data. While the company may be best known as a SaaS-based customer relationship management (CRM) application, it also has a robust platform that allows customers to build new applications on its cloud.

Force.com and Heroku, the latter of which Salesforce acquired in 2010, are platform as a service (PaaS) tools allowing customers to leverage CRM data already in Salesforce’s cloud and build related applications that are customized to individual users’ needs. It’s where Malamud built his company’s app. A Salesforce CRM customer, for example, could build an application on Force.com that integrates with the CRM application to analyze the sales data. And Malamud says every new application that’s built in Salesforce.com’s environment is one less app that’s running in Amazon’s cloud.

Amazon: We’ve got data too
Amazon is responding in turn, though. In the past year AWS has made a concerted effort to manage more of its customers’ data. Redshift, the company’s headline announcement at its first annual user conference, re:Invent, is a new data warehousing service meant to be a low-cost alternative to expensive on-premises database storage systems. Amazon Glacier is a “cold storage” service for storing a company’s long-term data, while Data Pipeline is a relatively new service that makes it easier to transfer all that data between various applications within Amazon’s cloud. “They’re clearly trying to get as much of your data as possible,” Malamud says.

Malamud says Salesforce will be the place where next-generation apps will be built, providing a legitimate threat to Amazon moving forward.
“It’s a legitimate theory, but it’s more of a longer term play,” says David Vellante, chief analyst at research firm The Wikibon Project, about the Salesforce.com-Amazon rivalry. The two companies are not really direct competitors right now, he says. They’re both cloud-based, but AWS at its core is about providing fast, easy and cheap access to virtual machines, storage and hosted applications in its IaaS cloud. Salesforce.com is a SaaS provider attempting to build up an accompanying PaaS.

Amazon’s bigger near-term competitors are the growing cavalry of IaaS providers looking to steal business from the company, he says. Google, Microsoft and Rackspace (with its OpenStack platform), as well as VMware, HP, Dell, Joyent, Terremark and Savvis, are just some of the whole range of IaaS providers looking to bite into Amazon’s market share that pose a more immediate threat to AWS.

Robert Mahowald, research vice president at IDC who leads the software as a service (SaaS) and cloud services practice, agrees with Vellante. “It’s not necessarily where the companies are today, but it’s certainly an aspiration of Salesforce,” he says. But he’s also on board with Malamud’s core premise of “follow the data.”

Applications that run in the cloud are fundamentally more important than the infrastructure they run on, so in that sense Salesforce has an advantage in being able to offer customers products, services and platforms that leverage data already in its cloud.

But AWS is a heavy-hitter in the cloud, too. Through partnerships with big enterprise software giants like SAP, Oracle and Microsoft, AWS allows customers to migrate their existing enterprise software licenses to Amazon’s cloud and let AWS worry about all the underlying infrastructure.

Salesforce.com has a different business model: The company isn’t pushing customers to migrate their SAP, Oracle and Microsoft apps into its cloud; it wants customers to be all-in with its own cloud. So far, the company has done an extraordinary job capturing the CRM market, but existing business apps aren’t being migrated into Salesforce’s cloud.

To Mahowald, that means the Amazon vs. Salesforce debate comes down to a new vs. existing apps debate. Amazon has everything in place to give customers the opportunity to outsource their packaged software onto its cloud, something enterprises are becoming more and more comfortable with. Salesforce wants to be the place where the enterprises’ next-generation business apps are built and stored.

The problem for AWS is that there are increasingly more and more competitors offering similar IaaS services. To date, Amazon has simply done it better than its competitors, Vellante says — it out-innovates competitors, has a broader range of services and continually lowers its prices. It’s tough for competitors to keep up, but a crop of providers are trying.

Some providers are carving out niches in vertical markets, offering healthcare-, government- or financial services-focused clouds, for example. Others are banking on the hybrid cloud — which combines both on-premises and public cloud resources — as the future of the industry. VMware, sensing an opportunity in the market, recently announced plans to create a hybrid cloud offering.

Salesforce isn’t competing with those offerings, though, Vellante says. Salesforce has found a niche in its ecosystem of customers and is nurturing and growing it. But Salesforce.com is not the be-all and end-all of cloud service providers, now or into the future. “If you’re running a big data app and you need a 10-node cluster spun up today to host your analytics app, you’re not going to Salesforce,” Vellante says. “You’re going to Amazon or another IaaS.” It’s a different play for each of the providers, which is why Vellante says both of these companies — AWS and Salesforce.com — will be around for a long time, and they both likely will make a lot of money in the cloud.


Amazon Web Services launches CloudSearch

The service is based on Amazon’s A9 technology and aims to simplify the use of search

IDG News Service – Amazon Web Services has introduced CloudSearch, which allows users of its cloud to integrate fully managed and highly scalable search functionality into their applications, the company said on Thursday.

CloudSearch is based on the same A9 technology that powers search for Amazon.com, the company said.

To use the search functionality, IT staff start by creating a search domain and uploading the data they want searchable. CloudSearch then automatically provisions the technology resources required and the indexes needed, the company said.

To make data searchable, it first needs to be described in the Search Data Format, which can be done using JSON (JavaScript Object Notation) or XML text files, according to an FAQ.
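To give a concrete sense of the workflow, here is a rough Python sketch of assembling an SDF-style batch as JSON. The document IDs and field names below are purely illustrative, not taken from a real CloudSearch domain; the exact schema is defined in Amazon’s FAQ and documentation.

```python
# Hypothetical sketch: wrap plain dicts of fields in SDF-style "add"
# operations and serialize the batch to JSON for upload.
import json

def make_sdf_batch(documents):
    """Build a list of 'add' operations from (id, version, fields) tuples."""
    return [
        {"type": "add", "id": doc_id, "version": version, "fields": fields}
        for doc_id, version, fields in documents
    ]

# Illustrative documents for a made-up movie-search domain
batch = make_sdf_batch([
    ("movie-1", 1, {"title": "The Matrix", "genre": "sci-fi"}),
    ("movie-2", 1, {"title": "Casablanca", "genre": "drama"}),
])
print(json.dumps(batch, indent=2))
```

The resulting JSON array would then be uploaded to the search domain as one batch request.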


CloudSearch automatically scales as the amount of searchable data increases or as the query rate changes, and enterprises can change search parameters, fine tune search relevance and apply new settings at any time without having to upload the data again.

Settings are changed using the AWS Management Console, which is also used to administer Amazon’s other services.

As with its other cloud services, Amazon pitches the new addition as a way to add search capabilities without needing a lot of expertise.

However, figuring out what the service will cost may not be as simple. Users are billed on a monthly basis for search instances, document batch uploads, IndexDocuments requests and data transfer.

As a managed service, CloudSearch determines the size and number of search instances required to deliver low-latency, high-throughput search performance. The service builds an index and picks an appropriate initial search instance type to ensure that the index can be stored in RAM.

Instance types come in small, large and extra large, costing 12 cents, 48 cents and 68 cents per hour, respectively.

CloudSearch issues new IndexDocuments requests when IT staff make configuration changes to the index, for example by adding a field. The charge is 98 cents per gigabyte of data stored in the search domain.

Added to that is 10 cents per 1,000 batch upload requests, each of which can be up to 5MB. The last part of the bill is a charge for the amount of data transferred out of CloudSearch. In the US East region, the first 10TB costs 12 cents per GB, according to Amazon.
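Putting those price components together, a back-of-the-envelope monthly estimate can be sketched in Python. The workload figures used here (instance hours, data volumes, batch counts) are illustrative assumptions, not Amazon’s published example.

```python
# Rough monthly cost estimate for a small CloudSearch deployment, using
# the per-unit prices quoted above. Workload numbers are assumptions.

HOURS_PER_MONTH = 720  # 30 days

def monthly_cost(instance_rate_hr, index_gb, batch_uploads, transfer_gb):
    instance = instance_rate_hr * HOURS_PER_MONTH   # search instance hours
    indexing = 0.98 * index_gb                      # $0.98 per GB in the domain
    batches = 0.10 * (batch_uploads / 1000.0)       # $0.10 per 1,000 batches
    transfer = 0.12 * transfer_gb                   # US East, first 10 TB out
    return instance + indexing + batches + transfer

# One small instance ($0.12/hr) holding ~100MB of searchable data
total = monthly_cost(0.12, 0.1, 20, 0.05)
print(round(total, 2))
```

Note that the search instance hours dominate the bill at this scale; the indexing, upload and transfer charges amount to pennies.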

On the CloudSearch website, Amazon details a cost example that includes 100MB of data and adds up to $86.94 per month.


How To Hide Your Data

Want to keep your private files under wraps without making it obvious they’re important? Rather than encryption, try hiding them, so prying eyes don’t even know they exist.

We live in a world where data rules. Sharing your files, from docs to pictures to videos is as easy as breathing. But we’ve all got some stuff that we’d like to keep to ourselves. It could be data files that are so important to your company that your job hangs in the balance. Maybe you have secret plans drawn up on your home PC, and you don’t want busybody siblings, parents, spouses, or offspring peeking at them. Perhaps you travel and, on principle, you don’t want “The Man” getting into your stuff even legally—customs agents and border patrol can delay you plenty if you don’t show them what they want. Occasionally, we all need to make sure some of our important files aren’t open to all.


The typical method for file protection is encryption—the process of turning your information into unreadable junk that can’t be opened without a password. However, encrypting a file is like sticking a red flag in the data that says, “Look at me! I’m ever so important and clandestine! Please, obsess over cracking my cipher! Decrypt me and you’ll know all my secrets.”

Thankfully, there is a better way—one that can work hand in hand with encryption. Camouflaging your data—and where you store it—can go a long way to providing you with peace of mind. Plus, you’ll never stir up those snoops in the first place.
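As a trivial illustration of camouflage rather than encryption, the Python sketch below renames a file with a leading dot, the Unix convention for keeping files out of default directory listings. This won’t hide anything on Windows, and the file name is made up for the demo.

```python
# Minimal sketch: "hide" a file by giving it a dot-prefixed name, which
# default directory listings on Unix-like systems skip. This is
# camouflage, not protection; the file is still readable by anyone
# who lists hidden files.
import os
import tempfile

def hide(path):
    """Rename a file so default Unix directory listings skip it."""
    directory, name = os.path.split(path)
    if name.startswith("."):
        return path  # already "hidden"
    hidden = os.path.join(directory, "." + name)
    os.rename(path, hidden)
    return hidden

# Demo in a throwaway directory with a made-up file name
with tempfile.TemporaryDirectory() as d:
    secret = os.path.join(d, "plans.txt")
    with open(secret, "w") as f:
        f.write("top secret")
    hidden = hide(secret)
    print(os.path.basename(hidden))  # .plans.txt
```

Pair a trick like this with encryption and sensible storage choices; on its own it only keeps casual snoops from noticing the file exists.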

Get in the Cloud
When we talk about hiding files, we typically mean they’re still stored on your computer hard drive but are invisible to digital peeping-toms. However, these days, a quick way to keep files handy but not readily visible is to store them in the cloud. The files remain hidden (the files aren’t physically with you), though you can access them anytime and anywhere on any computer. The secret of keeping the data truly hidden is to erase your browser history after accessing them and sign out of your cloud storage accounts without saving passwords. In theory, no one will ever know you have files online to access.

This isn’t the same as synchronizing data with the cloud and other computers, like you would with Dropbox or SugarSync or the like. Those services don’t hide your files; in fact, they arguably put them in more locations for people to find. To help hide them, you could cheat a little by deactivating syncing to your computer for a limited period—say, when you’re traveling—and then turn it back on to get your files back on the drive later. But that defeats the purpose of ongoing sync. (Some of the tips below can help obscure files even when synced, however.)

There are a few ways to skin the cloud-storage cat, but right now the best services for the average computer user are:

• Amazon Cloud Drive (3 stars): You can store any kind of file, and if it’s a music file (MP3 or AAC), you can play it back easily with the Amazon Cloud Player app. The allotment is 5GB for free and then $1 per GB per year (so about $20 for 25GB a year). However, you can jump up to 20GB for “free” for a year if you purchase one MP3-based album from Amazon. Note: This service lacks sharing, backup, and online editing.

• Google Docs (3.5 stars): It was once the place where you went to edit files created with the Docs apps. Now Google lets you save any kind of file, too. You get 1GB free, but you can upgrade to 20GB for $5 per year—a better price than Amazon Cloud Drive but not as nice as Windows SkyDrive—and scale all the way up to 16TB for a measly $4,096 per year. The extra storage you buy is also shared with Gmail and Picasa for messages and pictures, respectively. Sharing is a regular component, and Docs is all about online editing, but you’d have to convert your files to its format during the file transfer to edit them later.

• Windows Live SkyDrive: Free file storage of 25GB is hard to beat. If they’re Microsoft Office files, you can edit them online easily, either by opening the files instantly in the online Office app equivalent or at office.live.com (if you use Internet Explorer as your browser). Sharing files with others is a built-in feature.

Remember, most cloud storage is meant to be a backup of your local files. My suggestion, however, is to move files to the cloud. Leaving them on your hard drive means they’re still visible. The three services mentioned above excel at backup, as well as acting as a cloud-based hard drive (albeit not as a drive letter in Windows Explorer), which will help keep your files on the down low.

Amazon Thinks Every Penny Counts

All those loose coins sitting around your house are music to Amazon’s ears. The popular e-commerce site announced on Tuesday a program that will allow users of Coinstar counting machines to cash in their extra change for Amazon.com gift certificates.


Coinstar normally charges an 8.9 percent convenience fee for use of its machines. However, by selecting the option to receive an Amazon gift certificate, users would be able to bypass the charge.

While Coinstar is not making money through convenience fees, it is making money off of each gift certificate sold, as Amazon is selling them to the company at a discount.

Both companies feel the deal will help expand their reach. Amazon’s customer base is largely made up of those with credit cards, and Coinstar’s growth has been limited by consumers’ reluctance to pay the fee. With the deal, Amazon can now target cash-carrying consumers and Coinstar may be able to broaden its user base.

The program will initially be made available on 3,500 machines, with up to 5,000 machines offering the service by the end of the year, according to Coinstar. Since some machines also accept bills, a user could also load up a gift certificate by inserting bills into the machine.

Amazon is not the first company to sell gift certificates through Coinstar. Popular coffee store chain Starbucks last year successfully ran a pilot program that worked much in the same way. The Starbucks program now is offered on most Coinstar machines, along with gift cards from Hollywood Video and Pier 1 Imports.