Cloud computing can represent a net gain in data security and system reliability, especially for small businesses with aging computers and data stored on hard drives that rarely, if ever, get backed up.
But that doesn’t mean you can take security and reliability for granted. Protecting your company in the cloud requires careful due diligence and planning. Start here with these 10 cloud computing security tips.
1. Identify and Assign Value to Assets
Assets could include applications such as customer relationship management (CRM) or accounting; data, including private customer information; or infrastructure such as hosted servers and operating systems.
Ask yourself how valuable the assets you’re considering moving to the cloud are to your organization, said Cloud Security Alliance (CSA) advisor Raj Samani, the London-based chief technology officer for security software vendor McAfee.
What would happen if you couldn’t access online software for an hour or a day, for example, or if the provider lost your data or hackers stole sensitive information from the provider’s computers?
“Not all cloud providers are the same,” Samani noted. “If you assign a value to your assets, then it’s easier to decide what level of security you’re going to need.”
2. Assess Your Liabilities
One of the biggest cloud security concerns is the risk of breaches resulting in loss or theft of sensitive private data. If the leaked information is proprietary only to your company, liability is less of a concern. But you need to know where responsibility lies if customer or patient information goes missing.
“If there’s a breach and data is lost, it’s not the cloud provider who is on the hook,” said James Quin, lead analyst at Info-Tech Research Group Inc. “It’s the way all the regulatory bodies are coming down on this. You collected the data and chose how to store it. So you’re on the hook if something goes wrong.”
In other words, caveat emptor -- let the buyer beware. And in this case, you’re the buyer.
3. Research Compliance Requirements
In some industries -- banking and health care are examples -- government or industry regulations establish standards for how electronic data is handled, including stipulating the level of security that must be in place. You may not be permitted to use cloud services at all, or there may be restrictions, such as a requirement that data be stored within the borders of your own country.
“The number and type of security controls in place may well be defined by regulation,” Samani said. “If you’re processing credit card transactions, for example, you may need to comply with PCI-DSS standards. Long before you engage with potential providers, you need to build a list of regulatory requirements for security.”
Even if nothing ever goes wrong security-wise, failing to comply with regulations can land you in hot water.
4. Determine Your Risk Tolerance
These initial steps all play into this admittedly somewhat nebulous, but pivotal, next step. How much are you willing to risk, how much can you afford to risk -- given the liabilities, the regulatory requirements, the importance of the assets to your organization?
“Based on the level of risk I’m willing to tolerate, do I, for example, have to look at a hybrid cloud solution?” Samani said, referring to a cloud implementation in which some data or program logic remains on your business premises.
The other critical consideration is the cost of ensuring security, whether in the cloud or at your own offices. The more security controls you demand from cloud providers, the more expensive their services will be, Samani said.
“But if we could give any advice to small businesses, it would be to not necessarily accept the lowest-cost solution,” he added. “Cost is not the only thing [to consider].”
Cloud infrastructure is the natural starting point for any new project because it fits the two ideal cloud use cases: when your requirements are unknown, and when you need elasticity to run workloads at large scale for short periods or to handle traffic spikes. The problem comes months later, when you know your baseline resource requirements.
Let’s consider a high-throughput database as an example. Most web applications have a database behind the scenes storing customer information, and whatever the project, the requirements are very similar: you need a lot of memory and high-performance disk I/O.
Evaluating pure cloud
Looking at the costs for a single instance illustrates the requirements. In the real world you would need multiple instances for redundancy and replication, but we’ll work with a single instance for now:
Amazon EC2 c3.4xlarge (we can’t consider the m2.2xlarge because it is not SSD-backed)
= 30GB RAM, 320GB SSD storage
= $1.20/hr on demand, or $3,726 upfront + $0.298/hr with heavy utilization reserved pricing
Rackspace Cloud 30GB Performance
= 30GB RAM, 300GB SSD storage
= $1.36/hr
Databases also tend to exist for a long time, so they don’t generally fit the elastic model. This means you can’t take advantage of the hourly or minute-based pricing that makes cloud infrastructure cheap in short bursts.
So extend those costs on an annual basis:
Amazon EC2 c3.4xlarge heavy utilization reserved
= $3,726 + ($0.298 * 24 * 365) = $6,336/year
Rackspace Cloud 30GB Performance
= $1.36 * 24 * 365 = $11,914/year
Another issue with databases is that they tend not to behave nicely if you’re contending for I/O on a busy host, so both Rackspace and Amazon let you pay for dedicated instances. On Amazon this has a separate fee structure, and on Rackspace you effectively have to buy their largest instance type. Calculating those costs out for our annual database instance looks like this:
Amazon EC2 c3.4xlarge dedicated heavy utilization reserved
= $4,099 + ($0.328 + $2.00) * 24 * 365 = $24,492/year
Rackspace Cloud 120GB Performance
= $5.44 * 24 * 365 = $47,654/year
(The extra $2 per hour on EC2 is charged once per region)
Note that because we have to go for the largest Rackspace instance, the comparison isn’t direct: you’re paying Rackspace for 120GB of RAM and four 300GB SSDs. On one hand this isn’t a fair comparison because the specs are entirely different, but on the other, Rackspace doesn’t offer the flexibility of a dedicated 30GB instance.
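The arithmetic above can be sketched in a few lines of Python. The rates are the late-2013 list prices quoted in this post and will certainly have changed since:

```python
# Annual cost arithmetic for the instances compared above,
# using the list prices quoted in the text (late 2013).
HOURS_PER_YEAR = 24 * 365

def annual_cost(hourly_rate, upfront=0.0):
    """Upfront reserved fee (if any) plus metered hours for a full year."""
    return upfront + hourly_rate * HOURS_PER_YEAR

costs = {
    # Shared hosts
    "EC2 c3.4xlarge reserved": annual_cost(0.298, upfront=3726),
    "Rackspace 30GB Performance": annual_cost(1.36),
    # Dedicated options (EC2 adds a $2/hr per-region dedicated fee)
    "EC2 c3.4xlarge dedicated reserved": annual_cost(0.328 + 2.00, upfront=4099),
    "Rackspace 120GB Performance": annual_cost(5.44),
}

for name, cost in costs.items():
    print(f"{name}: ${cost:,.0f}/year")
```

Running this reproduces the four annual figures above: roughly $6,336 and $11,914 for the shared instances, and $24,492 and $47,654 for the dedicated options.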
Consider the dedicated hardware option…
Given the annual cost of these instances, the next logical step is to consider dedicated hardware where you rent the resources and the provider is responsible for upkeep. At my company, Server Density, we use Softlayer, now owned by IBM, and have dedicated hardware for our database nodes. IBM is becoming very competitive with Amazon and Rackspace so let’s add a similarly spec’d dedicated server from SoftLayer, at list prices:
To match a similar spec we can choose the Dual Processor Hex Core Xeon 2620 (2.0GHz Sandy Bridge) with 32GB RAM, a 32GB system disk and a 400GB secondary disk. This costs $789/month, or $9,468/year: 80 percent cheaper than Rackspace and 61 percent cheaper than Amazon before you even add data transfer costs. SoftLayer includes 5,000GB of data transfer per month, which would cost $600/month on both Amazon and Rackspace, a further saving of $7,200 a year.
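As a quick sanity check on those percentages, again using the list prices quoted in this post:

```python
# Rough check of the savings figures quoted above (late-2013 list prices).
softlayer = 789 * 12                              # $789/month dedicated server
ec2_dedicated = 4099 + (0.328 + 2.00) * 24 * 365  # dedicated reserved c3.4xlarge
rackspace_120 = 5.44 * 24 * 365                   # 120GB Performance instance

def pct_cheaper(vs):
    """How much cheaper SoftLayer is than the given annual cost, in percent."""
    return (1 - softlayer / vs) * 100

print(f"SoftLayer: ${softlayer:,}/year")
print(f"{pct_cheaper(rackspace_120):.0f}% cheaper than Rackspace")
print(f"{pct_cheaper(ec2_dedicated):.0f}% cheaper than Amazon")

# Included bandwidth: 5,000GB/month vs ~$600/month on the cloud providers
print(f"Bandwidth saving: ${600 * 12:,}/year")
```

This confirms the 80 percent and 61 percent figures, plus the $7,200 annual bandwidth saving.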
… or buy your own
There is another step you can take as you continue to grow: purchasing your own hardware and renting data center space, i.e., colocation. We’ll look into the tradeoffs of that scenario in a future post.
Fresh on the heels of AWS re:Invent and major announcements by AWS, Rackspace, and others, many companies may think of revisiting their cloud platforms for 2014.
Here are some tips to help web companies evaluate their options. In this post, we look at key services and features available from Amazon Web Services, Rackspace, and Google Cloud. We selected these three vendors because they are the most frequently considered among Stackdriver customers, based on a recent survey. In a second post, we’ll look at price-to-performance and intangibles such as support, traction, and community.
We are looking at the services from the perspective of a SaaS company, because we tend to look for different things than large enterprises or, say, your local dentist. We build distributed systems, follow devops principles, and value cloud providers that care about our use case.
First, let’s consider where each of the major vendors stand in terms of the key building blocks for modern web applications. We define these services as “key” based on their broad use within distributed SaaS applications. We will only consider the services that are fully managed by the vendor and integrated into the cloud platform (vs. part of separate services, delivered by partners, or configured and managed by customers on cloud compute instances).
The gap is clear. It is even larger when you consider that AWS has invested in many of the core building blocks for years at this point whereas several Google and Rackspace services are still “V1”. For example, when EBS was released it had basic snapshot capabilities — over the past three years we’ve seen reliability improvements, provisioned IOPS, and cross-region snapshots added. We should see Google close the gap on core services in 2014, which will put it on the shortlist for many companies. On the other hand, AWS will continue to promote additional services that are critical for certain use cases, such as MapReduce, data warehousing, search, transcoding, streaming, and more.
Elasticity & Dynamic Environments
The promise of elasticity brought many of us to the cloud in the first place, and we architect our applications to scale out in order to satisfy demand. All of the major vendors have one key building block in this area: load balancing services. These services make it easy for us to manually add more nodes to clusters without taking scheduled downtime, but autoscaling takes this to another level. Rackspace recently announced autoscaling; Google claims to support autoscaling, but it is not integrated with Compute Engine and the setup is fairly complex. AWS has supported autoscaling for years, has extended it to support spot instances, and offers several advanced orchestration services that make it easier to deploy and run dynamic applications in the cloud, including CloudFormation, OpsWorks, and Elastic Beanstalk.
Reliability & High Availability
Of course, reliability for a SaaS application starts with the reliability of its underlying infrastructure, and much attention has been paid to outages on AWS, particularly in 2011 and 2012. Part of the challenge here is that Amazon simply gets more press due to its popularity: Google Cloud was still in beta during this time, and Rackspace has a fraction of the AWS installed base. The reality is that outages happen everywhere, so we tend to favor vendors who have been battle-tested and are transparent about lessons learned.
Perhaps more importantly, with SaaS applications, we want cloud providers to make it easy for us to architect for failure. We use stateless systems, distribute applications across zones within a region, and use multiple regions–either actively or in a failover configuration. Here’s how the vendors stack up in terms of supporting reliability and HA:
With nine regions and twenty-five availability zones to choose from, AWS makes it easy to distribute work across physical locations. The new cross-region replication support for RDS and Dynamo is also a major step forward. On the other hand, Google’s work here should not be overlooked; starting with automatic replication is a real advantage and it matters that they have demonstrated a commitment to working across regions from the start.
AWS tops the list, but don’t count Google out
If you’re out shopping for a new cloud provider for your web application based on platform features and capabilities alone, AWS should be at the top of your shopping list today. This will not surprise anyone who has been watching the space and observing the unprecedented speed at which AWS has been introducing new capabilities. We do, however, see promising signs in Google’s decision to embrace cross-region support, so they will be a platform to watch in 2014. In any event, before you break out that credit card, you should check back for the next post where we consider price-to-performance, community and support.
When CenturyLink bought cloud startup Tier3 for $200 million earlier this month, it acquired a technology platform and a group of people that should let the telecommunications giant take full advantage of its dozens of data centers and expansive fiber network. The man heading up that effort is Jared Wray, the founder and CTO of Tier3 who’s now the CTO for cloud computing at CenturyLink, and he came on the Structure Show podcast this week to share his thoughts on evolving from being a small cloud provider into, potentially, a very large one.
Here are some highlights of our interview with Wray, centered around his thoughts on what it will take for other cloud providers to finally emerge from the shadows of Amazon Web Services, Microsoft and Google. If you want to hear about CenturyLink’s plan for building a single cloud platform out of its Tier3, AppFog and Savvis acquisitions, or about the intricacies of selling cloud computing to enterprises, you’ll want to listen to the whole show.
1. Get out of Florida
“Seattle has become the mecca of cloud,” Wray said, noting the presence there of his own company (and now CenturyLink), AWS, Microsoft, Google and others. “…That’s a lot of engineering talent that understands how to build cloud, how to make it happen, what distributed computing is really about — especially for cloud resources.”
Wray thinks telcos trying to build legitimate cloud computing businesses from their traditional corporate hubs are going to have a tough time, particularly around finding the talent they need.
2. Size matters
“When we talked to customers before this acquisition, it was, ‘We love everything about you guys.’ … But then they’d say, ‘Wow, you’re really small,’” Wray acknowledged.
Now, his company of about 60 employees running out of nine data centers is part of a company with thousands of employees and 55 data centers around the world. Wray said Tier3 had been able to secure some decent partnership deals with large data center operators and value-added resellers, but even those were hard-fought at times because “big companies want to do business with big companies.”
3. Find a niche and own it
“I actually think you can compete [as an independent IaaS provider], but you’re going to have to isolate yourself and find an industry or a channel that you own,” Wray said. “And you have to completely own it.”
He was referring to the spate of consolidation and large-vendor entries into the infrastructure-as-a-service space, which now has very few privately held providers (GoGrid and Virtustream are the only two among Gartner’s 15 Magic Quadrant providers, for example). He pointed to Virtustream’s focus on running SAP applications as a good example, and also noted the possibilities in spaces such as health care and government workloads.
4. VMware is a competitor and a great partner
“In the end, this market’s growing fast, so there’s plenty of pie for everybody,” Wray said.
Yes, Tier3 was (and CenturyLink is) a VMware partner, but they’re also competitors with all of VMware’s other partners and even VMware itself as it gets deeper into the cloud space. The trick to navigating this potentially tricky situation might be to find value beyond just offering up on-demand VMware instances.
“Really, the hypervisor is becoming irrelevant, if you think about it,” Wray said. “Nobody, in the end, cares. They care about it when they’re private because they have to manage it and use it, but when it comes to cloud they just know that we’re gonna give them an SLA and we’re gonna stick to that.”
5. What can platform-as-a-service do for you?
“There’s a lot of things that the framework of Cloud Foundry does that’s above what Docker does,” Wray explained. “I think Docker is an amazing suite of technologies — we’re already working on some things to use for Docker — but you have to kind of think about what tool you’re gonna use in your toolbelt and when.”
This is more than just an endorsement of two different PaaS projects — it’s also a testament to how much thought Tier3 had been putting into being more than an IaaS provider. Tier3 actually built a .NET-centric take on Cloud Foundry called Iron Foundry a couple of years ago, and now it’s looking to Docker (as well as the AppFog PaaS technology CenturyLink acquired in June) to help expand that vision.
Logistik Group, the award-winning communications, experiential and brand activation agency, have strengthened their Communications Consultancy team with organisational and social psychology communications specialist Daphna Salomon joining them in their London office.
Daphna joins Logistik from Kenexa, an IBM company, where she was the lead consultant working with leadership teams of complex, multi-functional and geographically diverse organisations. Her client list included Volvo Group, Lloyds Banking Group, SEB Group, Comptel and ECDC. With a London School of Economics and Political Science MSc, Daphna specialises in organisational and social psychology, with a particular focus on experiential events and cross-cultural communication.
Logistik Group’s Head of Consultancy, Matthew Ede, commented, “The appointment of Daphna to the Consultancy team has added strength and breadth to Logistik’s rapidly growing in-house team. Our clients are increasingly looking to bring a higher level of people understanding and insight into their communication and engagement programmes. That’s exactly the right approach – it’s what will make or break success and determine whether the programmes become embedded in the business or not.
“Daphna’s specialised skill-set in organisational and social psychology is the perfect complement to our Consultancy team, providing our clients with unique expert advice and strategic delivery, developing programmes and experiences that engage people and help them to achieve their goals,” said Ede.
Daphna Salomon has already begun work on a number of projects across the company’s portfolio of clients and said, “I’m excited about having joined Logistik Group as a Communications and Engagement strategist. I was drawn to them because of their reputation as one of the UK’s best communications agencies and the calibre and quality of clients they partner with.”
2013 has been a year of growth for the Logistik Group, with several new business wins across their core sectors including digital, production and consultancy.
An autocrat is a ruler of unlimited power and authority. An autocratic management technique is used by a manager who likes to make all of the decisions and have absolute control over her employees. She gives orders and expects them to be carried out swiftly. Her technique is to control workers to achieve maximum efficiency and productivity, and she is not interested in listening to employee feedback. An autocratic management technique can rub a lot of employees the wrong way, but it can also be effective in a large company with unskilled workers, or when a business is in crisis and decisions need to be made fast.
A manager who uses a paternalistic technique is most concerned with the social element of the business. He cares about how his employees feel about work and other issues and will always consider their views when making decisions. He has no trouble making final decisions, but will consult with his employees regularly and do what is best for everyone. A paternalistic management technique can slow down overall decision making, and may not be the best style for a fast-paced environment.
A manager who uses a democratic management technique places trust in her employees. She wants them to make decisions, and empowers them by giving them some authority. She shows good communication skills and listens to advice and ideas from her employees. A democratic management style can result in slower decision making and more mistakes, because the employees trusted with making decisions aren’t always skilled enough to do it right.
A superior manager will recognize that every employee is different and that one specific management technique will not be successful for everyone. He will develop a hybrid of all three management techniques that takes into consideration each employee's individual learning style and personality. This type of management technique can be challenging when working with larger groups of employees, but with patience, will ultimately get the most out of each one.
An effective manager pays attention to many facets of management, leadership and learning within organizations. So it’s difficult to take the topic of "management success" and reduce it to a short list of the most important items. I will, however, suggest seven management skills without which I don’t believe you can be a successful manager.
The most important issue in management success is being a person that others want to follow. Every action you take during your career in an organization helps determine whether people will one day want to follow you.
A successful manager, one whom others want to follow:
- Builds effective and responsive interpersonal relationships. Reporting staff members, colleagues and executives respect his or her ability to demonstrate caring, collaboration, respect, trust and attentiveness.
- Communicates effectively in person, print and email. Listening and two-way feedback characterize his or her interaction with others.
- Builds the team and enables other staff to collaborate more effectively with each other. People feel they have become more effective, more creative and more productive in the presence of a team builder.
- Understands the financial aspects of the business, sets goals, and measures and documents staff progress and success.
- Knows how to create an environment in which people experience positive morale and recognition and employees are motivated to work hard for the success of the business.
- Leads by example and provides recognition when others do the same.
- Helps people grow and develop their skills and capabilities through education and on-the-job learning.
Chances are good that, at some time in your life, you've taken a time management class, read about it in books, and tried to use an electronic or paper-based day planner to organize, prioritize and schedule your day. "Why, with this knowledge and these gadgets," you may ask, "do I still feel like I can't get everything done I need to?"
The answer is simple. Everything you ever learned about managing time is a complete waste of time because it doesn't work.
Before you can even begin to manage time, you must learn what time is. A dictionary defines time as "the point or period at which things occur." Put simply, time is when stuff happens.
There are two types of time: clock time and real time. In clock time, there are 60 seconds in a minute, 60 minutes in an hour, 24 hours in a day and 365 days in a year. All time passes equally. When someone turns 50, they are exactly 50 years old, no more or no less.
In real time, all time is relative. Time flies or drags depending on what you're doing. Two hours at the department of motor vehicles can feel like 12 years. And yet our 12-year-old children seem to have grown up in only two hours.
Which time describes the world in which you really live, real time or clock time?
The reason time management gadgets and systems don't work is that these systems are designed to manage clock time. Clock time is irrelevant. You don't live in or even have access to clock time. You live in real time, a world in which all time flies when you are having fun or drags when you are doing your taxes.
The good news is that real time is mental. It exists between your ears. You create it. Anything you create, you can manage. It's time to remove any self-sabotage or self-limitation you have around "not having enough time," or today not being "the right time" to start a business or manage your current business properly.
There are only three ways to spend time: thoughts, conversations and actions. Regardless of the type of business you own, your work will be composed of those three items.
As an entrepreneur, you may be frequently interrupted or pulled in different directions. While you cannot eliminate interruptions, you do get a say on how much time you will spend on them and how much time you will spend on the thoughts, conversations and actions that will lead you to success.
Related: Tips for a More Productive Day
Practice the following techniques to become the master of your own time:
- Carry a schedule and record all your thoughts, conversations and activities for a week. This will help you understand how much you can get done during the course of a day and where your precious moments are going. You'll see how much time is actually spent producing results and how much time is wasted on unproductive thoughts, conversations and actions.
- Any activity or conversation that's important to your success should have a time assigned to it. To-do lists get longer and longer to the point where they're unworkable. Appointment books work. Schedule appointments with yourself and create time blocks for high-priority thoughts, conversations, and actions. Schedule when they will begin and end. Have the discipline to keep these appointments.
- Plan to spend at least 50 percent of your time engaged in the thoughts, activities and conversations that produce most of your results.
- Schedule time for interruptions. Plan time to be pulled away from what you're doing. Take, for instance, the concept of having "office hours." Isn't "office hours" just another way of saying "planned interruptions"?
- Take the first 30 minutes of every day to plan your day. Don't start your day until you complete your time plan. The most important time of your day is the time you schedule to schedule time.
- Take five minutes before every call and task to decide what result you want to attain. This will help you know what success looks like before you start. And it will also slow time down. Take five minutes after each call and activity to determine whether your desired result was achieved. If not, what was missing? How do you put what's missing in your next call or activity?
- Put up a "Do not disturb" sign when you absolutely have to get work done.
- Practice not answering the phone just because it's ringing, or reading e-mails just because they show up. Disconnect instant messaging. Don't instantly give people your attention unless it's absolutely crucial in your business to offer an immediate human response. Instead, schedule time to answer e-mail and return phone calls.
- Block out other distractions like Facebook and other forms of social media unless you use these tools to generate business.
- Remember that it's impossible to get everything done. Also remember that odds are good that 20 percent of your thoughts, conversations and activities produce 80 percent of your results.
IBM (IBM) is having an identity crisis, and it sure is something to watch.
You may have heard about Big Blue’s recent ad campaign that takes a dig at Amazon.com (AMZN). In its marketing material, IBM claims to power 270,000 more websites than Amazon via its cloud computing service. It’s a flimsy jab at Amazon, because IBM has been a major laggard in the cloud rental market, having bought its way into the business in July with its acquisition of SoftLayer Technologies.
Far from being a cloud pioneer, IBM has spent most of the past few years downplaying services such as Amazon’s as insecure, low-margin businesses of little interest to a serious computing company. “You can’t just take a credit card and swipe it and be on our cloud,” IBM executive Ric Telford told me in early 2011. The company’s pitch to customers was that it knew them intimately and its cloud system was safer. But thousands of startups, including Dropbox and Netflix (NFLX), were more than happy to swipe their credit cards and get going on Amazon.
IBM’s reluctance to enter the credit card-swiping end of the cloud business was in keeping with its shift away from low-margin disk drives, PCs, and networking gear toward higher-profit software and services. The company wanted to sell cloud services to large corporate users willing to pay a premium for some hand-holding, not retreat into by-the-hour computer rentals. Unfortunately for IBM, the market and equipment have matured so much that fewer and fewer customers need much hand-holding these days. Even the most arcane data-center equipment is getting easier and easier to use.
Let’s be clear: There’s plenty of work left for an IBM to do. The healthcare.gov debacle shows just how awful some technology projects can still get. But IBM’s revenue will keep falling with this strategy in place, as other companies turn to it less and less.
The acquisition of SoftLayer shows that IBM knows it needs to engage in some hand-to-hand industry combat if it wants to remain relevant. Having sold its disk drive business to Asia, IBM is now renting disk drives by the hour for pennies. If you want to be a technology company in 2013, that’s the sort of thing you must do.
OK, the word “loathing” overstates the case, but Amazon’s prodigious public cloud does inspire fear even among some of the company’s best partners.
With that one sentence Jassy put Citrix Systems (CTXS), VMware (VMW), and Microsoft (MSFT)—all desktop virtualization players—on notice. What was unclear to many is why, with adoption of desktop virtualization still lagging despite all these available options, AWS should get involved.
Amazon’s response to that question was, as it almost always is: customer demand. “Our most frequent request from large customers has been for a desktop solution,” Adam Selipsky, vice president for marketing, sales, product management, and support, told me. “There’s a big pain point around desktop management—a lot of cost around software, hardware, and administration, and that’s only gotten worse with the proliferation of new devices,” he said.
By offloading all that hardware/software/admin to Amazon’s cloud, IT folks could, in theory, rid themselves of a huge headache. But the aforementioned desktop virtualization players—as well as flash storage startups that cite desktop virtualization as a key driver to adoption—may feel that pain as well if WorkSpaces takes off.
The proliferation of AWS services doesn’t ding just entrenched IT giants; it also affects the hundreds of small ISV and service partners that have grown up around AWS itself.
Smaller players, many of which offer add-on monitoring, cost assessment, and other tools that fill gaps in the AWS stack, publicly praise the company’s ability to churn out new services continually and cut prices. Privately, they’re sweating out concerns that if their business does well enough, AWS will swoop into their market and take it over, as always citing customer demand.
Redmonk analyst Stephen O’Grady has a great take here on the breadth of Amazon’s ambitions on display at AWS re:Invent. He sees the same similarities I do between AWS and the Microsoft of a decade or so ago, when Microsoft owned more than 90 percent of the computer desktop and a huge chunk of the server OS market.
“AWS is effectively the juggernaut that Microsoft was, but in a market with—at least theoretically—less protection from lock-in,” O’Grady said via e-mail. “So it follows that they’ll be more aggressive from an innovation standpoint than Microsoft was because they’ll have to be. Hence the 200+ releases per year. That’s how they’ll hope to sustain the momentum.”
Joseph Coyle, North America chief technology officer for Cap Gemini (CAP:FP), an AWS partner, sees similarities to Microsoft on the surface but also sees one big difference.
“Say what you want about [Amazon CEO] Jeff Bezos but his vision is not at all the same as was Bill Gates. Bezos wins not by squashing as MS did but by not even focusing on the competition and just on the client,” he noted by e-mail.
Of course, skeptics might argue that the whole “we just do what the customer asks” can be a clever way to mask ambitions to crush competitors, but either way, the result is the same.
While many see AWS right now as invincible, it’s helpful to remember that few companies sustain such dominance from era to era. And Amazon’s continued addition of new services carries with it its own risk.