Software Engineering Tip 0001:

When it looks like the solution to all your problems is to refactor, modularise, add namespaces, or rewrite existing code, step away from the keyboard.

Stack Ranking

Why stack-ranking is always a case of ‘the house always wins’. A few caveats first:

1. It’s partly a defendant’s argument, and I am biased towards my client (i.e. the employee).
2. I’ve little experience managing a group of people and don’t claim to know all the challenges involved.
3. My research/reading has been restricted to supporting ideas/theories/assumptions only, not the thorough, covering-all-other-bases (and unbiased) kind of literature survey. (**wink** the stack-ranking vs performance/vitality-curve distinction)

At first glance it looks like a wonderful meritocratic setup. It uses relative comparison with peers (not unlike the PageRank algorithm or eigenmorality). On the face of it, it is a brilliant idea, or at least a good idea that works well when measuring quantities that haven’t been quantified much before. In fact, if I were trying to do science on measuring performance, it’s a reasonably sane and tried approach. However, there are problems with using it.
I understand why it makes decisions easier (especially in big organizations): you get a single number that’s guaranteed to fall within some expected values (in the probability-theory sense), and forcing a curve simply makes it easier to fit a fixed budget for bonuses and incentives. However, here’s the challenge: how do you know your employees’/managers’/directors’ performance falls into a bell curve*? Maybe your company’s hiring practices consistently get you low-performing, or average, or high-performing employees (all three in comparison to the general population). In that case, aren’t you alienating a high-performing employee because his peers did better (perhaps in revenue)? The catch is that revenue is influenced by many more factors than just your employees’ performance.

Here’s a quote from here:

“You have to have an objective when you do stuff like this. At GE there was only one objective, and that was to force honesty. That’s all it ever was—to force an honest discussion between your manager and you. And there’s nothing that quite forces that more than employees knowing that they expect to know how that manager ranks them, and then asking that manager, ‘Tell me where I rank and tell me why.’”

See anything wrong in that argument? Try replacing ‘honesty’ with ‘dishonesty’ and the argument is still logically consistent and sounds right. Guess why? Because there’s an underlying assumption: that stack-ranking raises honesty (or honest communication). While I agree it’s a good way to force managers to give feedback (especially negative feedback) to their employees, I’m not convinced it’s good or that it encourages honesty. I get that people (and managers) are more likely to avoid giving negative feedback, and that they are also subject to confirmation bias, all of which can create bloated, inefficient departments/teams. But here’s the catch: when you force something like this, you eventually push the lowest ranks onto the people who are bad negotiators (with their managers) and therefore don’t push back when given negative feedback. Over half a decade or so you get a whole company of employees who are all very good negotiators (with no correlation, positive or negative, with performance).
In the end that defense sounds way too much like someone (a reformer) stuck in the values/virtues mode, aka the holy priest (I know, I’ve been guilty of it often, and probably am right now). Enough of debate-level arguments; here’s an attempt at a discussion of why it becomes something bad.
In theory, it can encourage managers to be honest and give negative feedback to their hires/employees, but in practice it comes down to compromises/favours/future promises traded between the employee and the manager. You’re forcing the manager to make a compromise/favour/future promise to one employee to pay off another. Even then, if it is still one number plus some subjective reasoning between manager and employee, it has some hope of being a measure**. Now, that post doesn’t make it clear why it’s a bad idea to use a normal curve for measuring performance, but that is a basic necessity before we can talk about using/finalizing measures of a hitherto unquantified phenomenon. For that we need to understand where this vitality-curve concept comes from.
I’ve been trying to find what research went into the whole stack-ranking idea; a Google Scholar search shows up nothing. A regular Google search turns up the Vitality curve. OK, so where could Jack Welch have picked up this insane idea of a vitality curve? The closest I can find is the

Central Limit Theorem in Statistics.

The basic premise of this theorem is that if we take enough samples of a random variable with an unknown distribution, the averages of those samples will form an (approximately) normal distribution.
This is not the strongest form of the theorem, but is the basic one the rest of the theorems are based on.
Now let’s look at what this means. When you’re examining a measurable quantity whose distribution is unknown, you can take samples (enough of them, and of large enough size), and the averages of those samples will form an approximately normal distribution.
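A quick simulation makes this concrete. The sketch below is mine, not from the post: the uniform distribution and the sample sizes are arbitrary choices. It draws many samples from a decidedly non-normal (flat) distribution and shows the sample means clustering tightly around the population mean, with the spread the theorem predicts.

```python
import random
import statistics

random.seed(42)  # make the sketch repeatable

def sample_means(sample_size, num_samples):
    """Return `num_samples` averages, each over `sample_size` uniform draws."""
    return [statistics.mean(random.random() for _ in range(sample_size))
            for _ in range(num_samples)]

means = sample_means(sample_size=50, num_samples=2000)

# Uniform(0, 1) has mean 0.5 and standard deviation sqrt(1/12) ~ 0.289.
# The CLT predicts the sample means centre on 0.5 with the much smaller
# spread 0.289 / sqrt(50) ~ 0.041.
print(statistics.mean(means))   # close to 0.5
print(statistics.stdev(means))  # close to 0.041
```

A histogram of `means` would look bell-shaped even though the underlying draws are flat; that is the whole content of the theorem.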
Why/How is this useful?

Well, it becomes useful when you want to compare two random variables and see whether they are correlated or share common causal factors.

Especially when you have figured out ways to manipulate/control one of the variables, you can design experiments that measure both variables, plot the difference of their sample averages, and see how much it deviates from the standard normal curve. This can tell us whether they are positively correlated, negatively correlated, or simply unrelated. This is how the experimental sciences work. Of course, it’s not perfect, but it’s the best we have.**
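The comparison idea can be sketched the same way. The two “variables” below are invented stand-ins (uniform distributions, one shifted by 0.1); the point is only that the differences of sample means centre on the true shift rather than on zero.

```python
import random
import statistics

random.seed(7)  # repeatable sketch

def mean_of_sample(low, high, n=50):
    """Average of n draws from Uniform(low, high)."""
    return statistics.mean(random.uniform(low, high) for _ in range(n))

# Variable A ~ Uniform(0, 1); variable B ~ Uniform(0.1, 1.1), i.e. shifted by 0.1.
diffs = [mean_of_sample(0.1, 1.1) - mean_of_sample(0.0, 1.0) for _ in range(1000)]

# If A and B were identically distributed, the differences would centre on 0;
# here they centre near the true shift.
print(statistics.mean(diffs))  # close to 0.1
```

Whether the centre of `diffs` sits credibly away from zero, relative to its spread, is exactly the question a significance test formalizes.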

Now, let’s get back to the original topic. If your organization/manager is implementing stack ranking and they refer to the central limit theorem (you’re in luck; I haven’t heard any manager relate the two, or name either), you can question where their idea of normality comes from. There will be cases where your manager tells you your performance was average/below-average/above-average with respect to the rest of the team/organization. You get to ask how they arrived at the normal curve’s values (the most likely answer is past years’ performance).

But here’s the catch: if they understand the experimentation process, the challenge then is to prove/question whether the current curve has seen enough samples. I don’t think that’s possible in most organizations or most roles. Of course, in very well-established industries with very specifically defined roles it makes sense and is possible, but I’m not sure it applies well in the modern business environment.

Now, the bigger your organization, the more likely your performance is rated along different aspects/vectors/areas, which multiplies the number of variables and actually complicates the problem (requiring more samples to normalize).

What are the basic premises of the “Central Limit Theorem”?
Well, for one, that the samples are independent draws of the same random variable, and that the variable has a finite mean and variance. If you then want to compare two such variables, both distributions need to satisfy these conditions.

* — A quick read based on the blog here suggests not all companies use the standard normal distribution, but normal/Gaussian distributions with different spreads. Facebook seems to have a narrower spread than Amazon (which makes me think of the differences in corporate culture and what this model entails for them, but that’s more thinking and perhaps another blog post, about Nash equilibrium, competition vs co-operation. Hunch/guess: more competition than co-operation at Facebook, and vice versa at Amazon). It’s not clear what Google uses.

** — Scientists, don’t get angry with this. I know there are more nuances to statistical inference, but I think this is the core value/process, and it can be explained simply. Besides, I’m not a real scientist, just a guy who left academia.

P.S.: To put a cynical quip (paraphrasing, I think, Douglas Adams): the universe is either mildly malevolent or neutral (i.e. definitely not benevolent); the modern workplace is definitely malevolent (either mildly or more).

Harmonic Mean

This is a followup post to geometric mean post.

What exactly is the harmonic mean?
Well, to summarize the Wikipedia link, it is basically a way to average rates.

Continuing with the laptop example, let’s see how to compare the laptops in terms of best bang for the buck.

Once again, we have three attributes, and we divide each attribute value by the cost of the laptop. This gives us (approximately) how much GB/Rupee* we get.

Then we apply the formula for the harmonic mean: 3/(1/x1 + 1/x2 + 1/x3).

Just for the fun of argumentation, I threw in a Raspberry Pi 2 (plus the cost of a 32 GB SD card).
And of course** the Raspberry Pi 2 comes out on top of the harmonic-mean (most bang for the buck) ranking.
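Here is the computation as a sketch in Python. The specs and prices are made-up placeholders (I don’t have the post’s actual spreadsheet values); only the method (divide each attribute by price, then take the harmonic mean of the resulting rates) comes from the text.

```python
from statistics import harmonic_mean

# (CPU speed in KHz, disk space in GB, RAM in GB), price in Rupees.
# These numbers are illustrative placeholders, not the post's real data.
machines = {
    "laptop_a": ((2_400_000, 500, 8), 45_000),
    "laptop_b": ((2_000_000, 1000, 4), 35_000),
    "rpi2_with_32gb_sd": ((900_000, 32, 1), 4_000),
}

def bang_for_buck(attrs, price):
    # Divide each attribute by price to get rates (units per Rupee),
    # then take the harmonic mean of those rates.
    return harmonic_mean([value / price for value in attrs])

ranking = sorted(machines, key=lambda m: bang_for_buck(*machines[m]), reverse=True)
print(ranking)  # the Raspberry Pi comes out on top with these numbers
```

Note how the harmonic mean is dominated by the smallest rate, so a low price lifts every attribute’s rate at once; that is why the cheap board wins here despite losing on raw specs.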

Note how I divided the attributes by cost. I did that because the harmonic mean doesn’t make sense when applied to values that are not rates. (Aka, for the engineers: the units have to have a denominator.)

Also note that the Raspberry Pi 2 is lower on both the arithmetic and geometric means of the attributes (CPU speed, disk space, RAM), but higher when it comes to value per price. That’s one reason to use the harmonic mean of rates (per price/time/etc.) when comparing similar purchases with multiple attributes/values to evaluate.

Now, so far these are all individual attributes that don’t capture or evaluate other factors.

Like, for example, Apple’s Retina display technology. Or, for that matter, CPU cache, AMD vs Intel processors, multithreading support, number of cores, etc.

All of these could be weighted, if you know how to weight them. Weighting them right would require some technical knowledge, and reading up reviews of products with those features on AnandTech’s review/comparison blog posts.

* — If you look closely at the Excel sheet, I multiplied the GHz values by 1000 to bring the numbers to a comparable scale.

** — Of course, because it doesn’t come with a monitor, keyboard or mouse. It is simply a bare board.

HTTP protocol: RFC study notes

Alright, I should have done this at least 2 years ago and was too much of an idiot not to; better late than never.

Study Notes — HTTP protocol (RFC 7230–7235)*

RFC 7230 — Message syntax and Routing

Key parties:
1. HTTP server: the system that responds to HTTP requests with HTTP responses.
2. User agent/HTTP client: the system that sends the HTTP requests.

There are some intermediate parties in the communication between 1 and 2 (because of how TCP/IP works).
Note: these are relevant because some of the keywords relate to them. (Aka, this is where the HTTP vs TCP/IP abstraction leaks.)
1. Proxy:
a message-forwarding agent selected by the client (via configurable rules),
commonly used to group an organization’s requests.
2. Gateway:
an intermediary that acts as the origin (HTTP) server for an outbound connection, but translates the requests and forwards them inbound to other servers.
3. Tunnel:
a blind relay between two connections that passes messages along. It differs from a gateway in that it does not translate the requests, but blindly passes them through. Generally used in situations like TLS/HTTPS secure communication via a firewall proxy.

Caching (details in RFC 7234):
1. A local store of previous response messages.
2. A response may or may not be cached based on:
a. whether the cacheable flag is set,
b. a set of constraints defined in RFC 7234.

A message has at least these fields. The version field is:

HTTP-version = HTTP-name “/” DIGIT “.” DIGIT
HTTP-name = %x48.54.54.50 ; “HTTP”, case-sensitive

The major version denotes the HTTP messaging syntax, while the minor version indicates the sender’s communication capabilities.
Hmm, these two don’t seem well-defined so far in the RFC.
My guess was that the major version tells the server which protocol-specific syntax is used for the request, while the minor version is which version the client understands, so the response can be formatted in a compatible manner.
My guess about the major number turned out to be wrong:

The intention of HTTP’s versioning design is that the major number
will only be incremented if an incompatible message syntax is
introduced, and that the minor number will only be incremented when
changes made to the protocol have the effect of adding to the message
semantics or implying additional capabilities of the sender.
However, the minor version was not incremented for the changes
introduced between [RFC2068] and [RFC2616], and this revision has
specifically avoided any such changes to the protocol.
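To make the version grammar concrete, here is a small sketch that parses a request-line by hand; the raw message is an invented example, not from the RFC.

```python
import re

# An invented HTTP/1.1 request message, for illustration only.
raw = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

# request-line = method SP request-target SP HTTP-version CRLF
request_line, _, _rest = raw.partition(b"\r\n")
method, target, version = request_line.decode("ascii").split(" ")

# HTTP-version = HTTP-name "/" DIGIT "." DIGIT ; HTTP-name ("HTTP") is case-sensitive
match = re.fullmatch(r"HTTP/(\d)\.(\d)", version)
major, minor = int(match.group(1)), int(match.group(2))
print(method, target, major, minor)  # GET /index.html 1 1
```

Note that the grammar allows exactly one digit on each side of the dot, which is why versions like “HTTP/1.10” don’t exist.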

Uniform Resource Identifiers:
1. Identify resources.
For the URI syntax, I’ll just quote from the links in the RFC:

URI-reference = URI / relative-ref
absolute-URI = scheme “:” hier-part [ “?” query ]
relative-part = “//” authority path-abempty / path-absolute / path-noscheme / path-empty
scheme = ALPHA *( ALPHA / DIGIT / “+” / “-” / “.” )
authority = [ userinfo “@” ] host [ “:” port ]
uri-host = host (see RFC 3986, Section 3.2.2)
port = *DIGIT
path-abempty = *( “/” segment )
segment = *pchar
query = *( pchar / “/” / “?” )
fragment = *( pchar / “/” / “?” )

absolute-path = 1*( “/” segment )
partial-URI = relative-part [ “?” query ]

http URI scheme:

http-URI = “http:” “//” authority path-abempty [ “?” query ] [ “#” fragment ]

* — Original RFC was 2616, but it was superseded by these.

What would I change about Python?

1. The semantics of the ‘or’ keyword. I know it’s supposed to be readable as it currently exists (i.e. evaluate the truthiness of the left-side expression and, if it’s falsy, evaluate and return the right-side expression; so it returns one of the operands, not a boolean). I’d rather have it return True or False instead. I think that’s more logical for a programmer, and perhaps the current behaviour is part of Python not being a purely functional language.

2. The distinction between expression and statement.

3. Side effects: while it’s possible to write code that provides a functional interface, the interpreter does not guarantee freedom from side effects/assignments.
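To make point 1 concrete, here is what ‘or’ actually returns today versus the strict-boolean behaviour the point argues for (`strict_or` is my own name for the proposed behaviour, not an existing Python function):

```python
# Today: `or` returns one of its operands, not a boolean.
print("" or "fallback")   # fallback  (left is falsy, right operand returned as-is)
print(0 or [])            # []        (both falsy, so the right operand is returned)
print(True or "ignored")  # True      (left is truthy and returned directly)

# The proposed behaviour: always collapse to True/False.
def strict_or(a, b):
    return bool(a) or bool(b)

print(strict_or("", "fallback"))  # True
print(strict_or(0, []))           # False
```

The operand-returning behaviour is what enables the common `value = maybe_none or default` idiom, which is presumably what would be lost under the strict version.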

Why read fiction?

Why do I read fiction? Or, what do I get out of reading fiction?
Vivek Haldar here talks about how he doesn’t read fiction because it does nothing for him, or rather means nothing to him.
It set me thinking, like a knot or a thorn in my brain. I read it a long time ago, and my first thought was that I’m the opposite:
I prefer reading fiction. In the time since, I have held the question in my mind and come up with the following possibilities:

0. Theory of mind: there’s some (scanty, debatable) evidence that reading fiction helps in understanding how other minds work.
Here’s the study.
And I do have a tendency to retreat into reading fiction when I am upset/confused or trying to figure out the right decision (usually regarding people in my life) to make.

1. I find it kind of enhances or clears my head, goading it into logical thinking* once I am done reading the fiction to completion.

2. It definitely affords a comfortable, guilt-free thing to do, without being (nay, feeling) guilty of procrastination; reading is supposedly always considered a good thing (socially).

3. It could also simply be my way of dealing with the modern world’s craziness, much like VGR refers to here.

4. It serves as good practice for thought experiments and therefore makes it easier to consider alternative explanations**.

5. It definitely helps to clear the emotional components out of my decision-making/thinking. More specifically, on the alertness/arousal scale, it helps lower my arousal level and therefore raises the alertness-to-arousal ratio. (One of my hypotheses is that rational thinking is directly proportional to the ratio of alertness to arousal.)

* — Might simply be wishful thinking on my part.
P.S.: The above is a rather descriptive attempt. Some of the points may, and probably do, overlap with other points. The bullet-point format is simply organized for communication, not for empirical hypothesis testing.

Share: Harry Potter and the Methods of Rationality

Somehow Harry had understood that, even before anyone else had warned him he’d understood. Before he’d read about Vladimir Lenin or the history of the French Revolution, he’d known. It might have been his earliest science fiction books warning him about people with good intentions, or maybe Harry had just seen the logic for himself. Somehow he’d known from the very beginning, that if he stepped outside his ethics whenever there was a reason, the end result wouldn’t be good.

A final image came to him, then: Lily Potter standing in front of her baby’s crib and measuring the intervals between outcomes: the final outcome if she stayed and tried to curse her enemy (dead Lily, dead Harry), the final outcome if she walked away (live Lily, dead Harry), weighing the expected utilities, and making the only sensible choice.

She would’ve been Harry’s mother if she had.

“But human beings can’t live like that,” the boy’s lips whispered to the empty classroom. “Human beings can’t live like that.”

measure theory and cog. psych.

Measure theory defines three properties a set function must satisfy to be considered a measure*.

  • 1. Non-negativity:
    The idea that a value should not go negative when measured (by whatever means/equipment in the real world).
  • 2. Null empty set:
    The idea that the measure of the empty set is zero.
  • 3. Countable additivity:
    This one means that if there are countably many pairwise disjoint sets with measures ‘X1, X2, …’, then the measure of the union of all those sets equals the sum ‘X1 + X2 + …’. (For sets that may overlap, the measure of the union is only less than or equal to the sum; that weaker property is called subadditivity.)
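A minimal sketch of the three properties, using the simplest measure there is: the counting measure, i.e. the size of a finite set. The example sets are arbitrary.

```python
def mu(s):
    """Counting measure: the number of elements in a finite set."""
    return len(s)

a, b = {1, 2}, {3, 4, 5}           # pairwise disjoint sets

assert mu(a) >= 0 and mu(b) >= 0   # 1. non-negativity
assert mu(set()) == 0              # 2. null empty set
assert mu(a | b) == mu(a) + mu(b)  # 3. additivity for disjoint sets

c = {2, 3}                         # overlaps with a, so only subadditivity holds
assert mu(a | c) <= mu(a) + mu(c)
print("counting measure satisfies the measure properties on these sets")
```

The interesting question for psychology is whether any proposed quantity (say, a score for executive control) behaves like `mu` here, especially under the union of conditions.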

* — I think it can be extrapolated/extended to measures of any geometric properties, but not beyond that. Very tellingly, it is used widely in a field called real analysis. After all, in electrical engineering we have all sorts of complex, negative, and fractional numbers. I picked up these definitions from a Fractal Geometry book rather than the Wikipedia links provided.
P.S.: There’s a more generalized definition of measure here, which might fit those cases too.
** — If you think about it, these are all just a set of rules for determining whether a given set function satisfies the properties of a measure, but that’s beside the point here.

As much of a fan of cognitive psychology as I have been so far, I am now beginning to wonder which and how many of its concepts, like executive control, etc., have been shown to obey these laws. I haven’t done a thorough survey or research, but I deeply suspect there haven’t been any published attempts in this direction. I would like to see some, but I think it may not be easy to pick a property that’s easy enough to deal with.

Also, I begin to wonder how many of these apply or scale to organizational psychology, or to committee-centered decision-making policies. Again, I suspect there have been very few attempts to scale/correlate cognitive-psychology concepts into organizational/behavioural psychology, never mind cross-checking them against the relevant math area’s base assumptions.