Friday, July 10, 2009

Brilliant Attacks

Brilliant attacks (especially the laser one!):
- Monitor the power line to detect and decode keystrokes from 15 m away.
- Monitor vibrations on a desktop object via laser reflections and decode keystrokes.

The more secure we think our systems are, the more we must remind ourselves that there are styles of attack that we cannot even conceive of.

If anybody tells me 'this is completely secure, it is unhackable', then the one thing I know for sure about the speaker is that they are NOT very imaginative. They might not be thinking very hard about possible attacks. To me this makes products from vendors who make such claims less secure; they will be blind-sided by some hacker with more imagination.

Tuesday, July 07, 2009

What is Scalability?

In computer systems we usually mean "linear scalability", which describes a relation between two measures. It is typically a reference to capacity. Two common measures are CPU capacity and number of users. If an application supports 10 users with 1 CPU, 20 users with 2 CPUs, and so on until the numbers get quite large, we would say that it is scalable, because "number of CPUs" = 0.1 * "number of users".

It is a bit more complicated than that, because we often have other constraints, such as "user response time remains constant" and memory use scaling as well. My first example also ignored the possibility of a constant offset, which would represent some fixed overhead: perhaps one-half of a CPU is required even if there are no users. For those who remember their math, y = mx + b is the equation that describes a line.
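To make that concrete, here is a minimal sketch in Python. The slope (0.1 CPUs per user, from the example above) and the half-CPU fixed overhead are illustrative numbers, not measurements from any real system:

    # A minimal sketch of the linear model y = mx + b for capacity planning.
    # The slope and intercept below are illustrative assumptions.

    CPUS_PER_USER = 0.1   # m: each user costs 0.1 CPU (from the example above)
    FIXED_OVERHEAD = 0.5  # b: half a CPU is consumed even with zero users

    def cpus_required(users):
        """Linear scalability: CPU demand grows in lockstep with users."""
        return CPUS_PER_USER * users + FIXED_OVERHEAD

    for users in (0, 10, 20, 100):
        print(f"{users:4d} users -> {cpus_required(users):5.1f} CPUs")

Plot those points and you get a straight line with a non-zero intercept: the fixed overhead you pay before the first user ever logs in.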

In my first paragraph I mentioned a possible exception: "until the numbers get quite large". How large is large? That depends on the situation the solution operates in. We typically consider sizes that are significantly larger than what we expect, but still within limits, and those limits vary with the nature of the solution. If a workload is driven by internet stock trades, the degree of growth we could expect is much more volatile than if it is driven by the number of Canadian branch locations for a bank (a market which is pretty much already saturated).

It would be false to assume that the linear scale continues without bound; linear scaling without bound would be a truly rare situation. At various points as the workload grows you will run into "walls". They are called that because, when you look at the graph of such a situation, your resource usage grows much faster than your workload - as if it hits a wall.

These wall situations usually occur because some other resource becomes saturated. For example, perhaps your database server "maxes out". In that case adding more CPUs to your application server won't help, but perhaps re-engineering the database server will remove that constraint and allow for further growth. Sometimes these walls are "hard"; that is, re-engineering won't alleviate the constraint.
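A toy sketch of such a wall, with made-up capacity numbers: once the hypothetical database saturates, adding application-server CPUs buys nothing.

    # Hypothetical capacities: each app-server CPU handles 10 users, but
    # the database saturates at 50 users' worth of queries regardless.
    APP_USERS_PER_CPU = 10
    DB_MAX_USERS = 50  # the "wall"

    def supported_users(app_cpus):
        """Effective capacity is capped by the most saturated resource."""
        return min(app_cpus * APP_USERS_PER_CPU, DB_MAX_USERS)

    for cpus in (1, 3, 5, 8, 16):
        print(f"{cpus:2d} app CPUs -> {supported_users(cpus)} users")

Everything past 5 CPUs is wasted spend until the database constraint is removed.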

There are two sources of these hard walls: coding and architecture. Some may argue that "coding" is just a different type of engineering constraint. I won't argue that, but in my company we have an engineering department that specializes in server sizing and configuration, and development departments that do the coding, so we classify them as two distinct problem types. I have another reason for that differentiation as well. Engineering constraints can usually be quickly fixed with the addition of more resources (servers, CPUs, memory, etc.) or the reconfiguration/reallocation of existing resources (add more threads, connections, heap). Coding problems, on the other hand, usually take much longer to diagnose, recode, retest, and redeploy. A trivial example of this would be the replacement of a linear search with a hash table lookup, as sketched below.
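For illustration, here is roughly what that trivial fix looks like in Python; the account names and sizes are invented purely to contrast the O(n) scan with the O(1) hash lookup.

    import time

    # Invented data set: 100,000 account names, looked up many times.
    accounts = [f"user{i}" for i in range(100_000)]

    def linear_lookup(name):
        return name in accounts      # O(n): scans the whole list

    account_set = set(accounts)      # build the hash table once

    def hashed_lookup(name):
        return name in account_set   # O(1) average: hash lookup

    for fn in (linear_lookup, hashed_lookup):
        start = time.perf_counter()
        for _ in range(1_000):
            fn("user99999")          # worst case for the linear scan
        print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")

A one-line change in effect, but finding it, retesting, and redeploying is what makes coding walls slow to fix.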

Architectural constraints are more fundamental design decisions which cannot easily be altered - for example, a design decision that requires an application to execute completely within a single server. This might be a simple design that performs well, as long as you can buy a larger server. Whether this is a good decision or not depends greatly on how reasonable your assumptions about the potential for growth may be.

Is scalability always a good thing? Perhaps not; it depends on what you are measuring. I recently read a product evaluation that said (incorrectly) that the product's license model was not scalable. The truth is that it is highly scalable: the more of the product we used, the more we paid. It wasn't linear, though, because volume discounts meant that unit costs dropped as volume increased (and that is a good thing!). What the author really meant was that they wanted NON-scalable pricing; they wanted a price ceiling (somewhat like a wall, except on the other dimension). At a certain point of volume growth, they didn't want to pay any more - a desirable feature for the buyer, but maybe not the seller.
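A sketch of that pricing shape, using invented tiers and an invented cap, just to show how "scalable but sub-linear" pricing and a ceiling differ:

    # Invented price tiers: unit cost drops as volume grows, and an
    # optional cap models the "price ceiling" the evaluator wanted.
    TIERS = [               # (units up to this limit, price per unit)
        (1_000, 10.00),
        (10_000, 8.00),
        (float("inf"), 5.00),
    ]

    def license_cost(units, ceiling=None):
        """Tiered cost: scalable, but sub-linear thanks to discounts."""
        cost, prev_limit = 0.0, 0
        for limit, unit_price in TIERS:
            band = min(units, limit) - prev_limit
            if band <= 0:
                break
            cost += band * unit_price
            prev_limit = limit
        return min(cost, ceiling) if ceiling is not None else cost

    for units in (500, 5_000, 50_000):
        print(units, license_cost(units), license_cost(units, ceiling=60_000.00))

Without the cap, cost keeps growing (just ever more slowly); with it, cost flatlines at the ceiling no matter how much volume grows.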

There is much more that could be written: horizontal versus vertical scaling, "knees in the curve", etc. But until then you might like to read the Wikipedia article.