Wednesday, April 16, 2014

Refactoring (Part 2)

How do we Refactor?

Refactoring can be triggered by many factors, such as a deeper understanding of the problem, a change in requirements, and so on. But never tear into a large body of code and refactor it all at once; we'll end up in a much worse position than when we started.

Refactoring has to be done slowly, deliberately and carefully. Below are some guidelines for refactoring given by Martin Fowler:

  • Don’t try to add functionality and refactor at the same time.
  • Have good tests before refactoring. It’ll help us to detect if anything breaks.
  • Take short steps.
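The guidelines above can be sketched with a small, hypothetical example: a test written before the refactoring step guards the behavior while the implementation changes underneath it.

```python
# Hypothetical example: a test suite written before refactoring guards the step.
def total_price(items):
    # Original version: explicit accumulator loop.
    total = 0
    for price, qty in items:
        total += price * qty
    return total

def total_price_refactored(items):
    # Refactored version: same behavior, expressed as a generator sum.
    return sum(price * qty for price, qty in items)

# Run the same checks against both versions; if the refactoring had
# changed behavior, these assertions would fail immediately.
cart = [(9.99, 2), (4.50, 1)]
assert total_price(cart) == total_price_refactored(cart)
assert total_price([]) == total_price_refactored([]) == 0
```

Each refactoring step stays small, and the tests are re-run after every one.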

While refactoring, fix the code and everything that depends on it. This will be a pain, but it's going to hurt more later.


- summary of Refactoring, from The Pragmatic Programmer: from Journeyman to Master

Tuesday, April 15, 2014

Refactoring (Part 1)

As the program develops, we will have to rethink earlier decisions and rework certain portions of the code. Code is not static. It needs to evolve.

Rewriting, reworking, and re-architecting code is collectively known as refactoring.

Refactoring… When should we do that?

Following are some scenarios where we need to refactor our code:

  • Duplication - when we detect a violation of DRY principle.
  • Non-orthogonal design - when we discover that the design can be much more orthogonal.
  • Outdated knowledge - when the knowledge about the problem domain increases.
  • Performance - when we need a much better performance than existing.

Refactoring is not always easy. We have to go through the existing code and modify it without affecting the functionality. Many developers are reluctant to do this because so much of their code is fragile.

Time is another reason for not refactoring. But the reality is that if we fail to refactor now, we'll have to spend much more time later fixing bigger problems.

Refactor Early, Refactor Often


Saturday, April 05, 2014

Algorithm Speed (Part 3)

Algorithm Speed in Practice

We may not deal with sorting or searching algorithms very often in real life, but there are still situations where we need to estimate. When we encounter a single loop over n items, it's easy to see that we are dealing with an O(n) algorithm. If it contains a nested loop over m items, it's O(n × m).
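As a quick sketch of reading complexity off loop structure (example functions are my own, not from the book):

```python
def find_max(values):
    """Single loop over n items: O(n)."""
    largest = values[0]
    for v in values[1:]:
        if v > largest:
            largest = v
    return largest

def count_common(a, b):
    """Nested loops over n and m items: O(n x m)."""
    count = 0
    for x in a:          # runs n times
        for y in b:      # runs m times per outer iteration
            if x == y:
                count += 1
    return count

assert find_max([3, 7, 2]) == 7
assert count_common([1, 2, 3], [2, 3, 4]) == 2
```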

Estimate the Order of Your Algorithms

If we have an algorithm of order O(n²), we can try to bring it down to O(n log n). If we don't know how long it takes, the easiest way is to test it with different sets of inputs and plot a graph. With around 3–4 points on the graph, we'll be able to estimate the order of the algorithm.
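A minimal sketch of "test with different inputs": time a routine at several sizes and look at how the runtime grows as n doubles (the workload function here is a deliberately quadratic stand-in, not any particular real algorithm).

```python
import time

def quadratic_work(n):
    """Deliberately O(n^2): a nested loop doing constant work."""
    total = 0
    for i in range(n):
        for j in range(n):
            total += 1
    return total

# Time the routine at doubling input sizes; these points can be plotted.
for n in (500, 1000, 2000, 4000):
    start = time.perf_counter()
    quadratic_work(n)
    elapsed = time.perf_counter() - start
    print(f"n={n:5d}  t={elapsed:.4f}s")

# For an O(n^2) algorithm, doubling n should roughly quadruple the time;
# a few such points on a graph make the trend easy to see.
```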

But this is not the whole story. A simple O(n²) algorithm can outperform an O(n log n) one for small values of n. At the end of the day, what really matters is how long our code takes to execute with real data in the production environment. So, always

Test Your Estimates

In some other cases, the fastest is not always the best to do the job. We have to make sure that the algorithm is apt for our problem, before going any further.


- summary of Algorithm Speed, from The Pragmatic Programmer: from Journeyman to Master

Monday, March 31, 2014

Algorithm Speed (Part 2)

The Big O Notation

The O() notation is a mathematical way of dealing with approximations. When we say that the worst-case time complexity of an algorithm is O(n²), it means that for n records, the time taken for the algorithm to run is on the order of the square of n. We keep only the highest-order term while estimating time complexity. For example:

O(n² + 2n) = O(n²)

The highest-order term dominates the others as n grows. And since we are discarding lower-order terms (and constant factors), one O(n²) algorithm can still be much faster than another O(n²) algorithm.
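A few concrete values make the dominance easy to see: the ratio of n² + 2n to n² alone heads toward 1 as n grows, so dropping the 2n term changes almost nothing.

```python
# Why O(n^2 + 2n) = O(n^2): the lower-order term becomes negligible.
for n in (10, 100, 1000):
    total = n**2 + 2*n
    ratio = total / n**2
    print(f"n={n:4d}  n^2+2n={total:8d}  ratio to n^2 = {ratio:.3f}")
# n=10   -> ratio 1.200
# n=100  -> ratio 1.020
# n=1000 -> ratio 1.002
```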


- summary of Algorithm Speed, from The Pragmatic Programmer: from Journeyman to Master

Saturday, March 29, 2014

Algorithm Speed (Part 1)

We estimate the time taken to complete a project, or the time taken to complete a particular task. There is another kind of estimation: estimating the resources used by an algorithm. This includes the time the algorithm takes to run, processor and memory consumption, etc.

This kind of estimation is always important. Resource estimation tells us how long the program takes to run with a particular set of inputs. It also helps us understand how the program scales for a large number of records, thereby letting us know which parts of the code need optimization.

How do we estimate algorithms? 

That is where the big-O notation comes in.

Estimating Algorithms… What does that mean?

Algorithms work with variable inputs: sorting takes an n-element array, matrix operations work on an n × m matrix, etc. The size of the input affects the running time and the amount of memory the algorithm uses.

But why do we need to estimate algorithm speed? Because the rate at which the running time grows is not always linear. An algorithm which takes one minute to process 10 records may take a lifetime to process 1000!
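To make the nonlinearity concrete, here is a hypothetical extrapolation: suppose processing 10 records takes 1 minute, and project the cost at larger sizes under two different growth assumptions (the constants are chosen so both agree at n = 10).

```python
def minutes_if_linear(n):
    """O(n) growth, calibrated to 1 minute at n = 10."""
    return n / 10.0

def minutes_if_quadratic(n):
    """O(n^2) growth, calibrated to 1 minute at n = 10."""
    return (n * n) / 100.0

for n in (10, 100, 1000):
    print(f"n={n:4d}  linear: {minutes_if_linear(n):8.0f} min  "
          f"quadratic: {minutes_if_quadratic(n):8.0f} min")
# At n=1000 the linear version needs 100 minutes, but the quadratic one
# needs 10,000 minutes -- roughly a week of continuous running.
```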

The big-O notation allows us to perform a more detailed analysis.


- summary of Algorithm Speed, from The Pragmatic Programmer: from Journeyman to Master

Monday, March 17, 2014

Programming by Coincidence (Part 2)

There is only one way to avoid all these accidents: Always program deliberately.

How to Program Deliberately

  • Always be aware of what you are doing. Never let things get out of hand.
  • Don’t code blindfolded. Chances of coincidence are high when we try to build an application without fully understanding it.
  • Always proceed from a plan. It doesn’t matter whether the plan is on paper or in your mind.
  • Rely only on reliable things. Do not depend on accidents or assumptions.
  • Document your assumptions. This helps us recollect and validate the assumptions later.
  • Don't just test your code, but test your assumptions as well. Write assertions to test your assumptions. Never guess anything; actually try it.
  • Prioritize your effort. Spend time on important aspects.
  • Don’t be a slave to history. Always be ready to refactor. Never let what you have already done constrain what you do next.
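The "test your assumptions" point is the most mechanical one to demonstrate. A small sketch, using an invented example: a binary-search lookup silently assumes its input is sorted, so we assert that assumption instead of relying on it by coincidence.

```python
import bisect

def lookup(sorted_values, target):
    """Return True if target is in sorted_values (binary search)."""
    # Document and check the assumption instead of trusting it silently:
    # an unsorted input would otherwise give a quietly wrong answer.
    assert all(a <= b for a, b in zip(sorted_values, sorted_values[1:])), \
        "lookup() assumes its input is sorted"
    i = bisect.bisect_left(sorted_values, target)
    return i < len(sorted_values) and sorted_values[i] == target

assert lookup([1, 3, 5, 8], 5) is True
assert lookup([1, 3, 5, 8], 4) is False
```

If the assumption is ever violated, the assertion fails loudly at the call site rather than letting a wrong result propagate.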



- summary of Programming by Coincidence, from The Pragmatic Programmer: from Journeyman to Master

Saturday, March 15, 2014

Programming by Coincidence (Part 1)

Programming is like working in a minefield! An explosion can happen at any moment. Never take chances. Always be careful.

Programming by coincidence is relying on luck and accidental successes.

But, how do we program by coincidence?

Imagine that we are working on a programming project. We add some code and run it; it seems to work. We add more code; it still works. Suddenly, after several days or weeks, the program stops working. We spend hours trying to see what went wrong, but we can’t figure it out.

Why?  Because, we didn't know why it worked in the first place!

Sometimes, we rely on coincidences. Here are some examples:

Accidents of Implementation

This happens because of the way the code is currently written. Suppose we call a routine with some data and it responds in a particular way, but the author didn’t intend for the routine to work that way. The accident can lie either in our code or in the routine itself. When the routine is fixed, our code might break.

Accidents of Context

We sometimes rely on the context in which we are currently working, forgetting that what holds in that particular context may not hold elsewhere. Accidents of context happen when we rely on something that isn’t guaranteed.

Implicit Assumptions

We assume things. Assumptions are usually poorly documented, and they may vary from developer to developer.

How can we avoid these problems?

Simple...

Don’t program by coincidence


- summary of Programming by Coincidence, from The Pragmatic Programmer: from Journeyman to Master