Saturday, August 4, 2012
Forensic Engineering – A probe into what the country should produce as Materials Engineers to compete globally.
I am writing this post to challenge and convince the responsible authorities about the unproductive deployment of the Materials Engineers we produce in this country. If one looks at the local industry in light of the above explanation, the legitimate uses of real materials-engineering knowledge become clear: determining accountability and preventing the catastrophic engineering disasters the nation faces. I could give you hundreds of examples; among them, the study of catastrophic accidents involving national air force aircraft during and after the war, the need to test propellers at power plant turbines, the need to test armoury, and the careful examination and testing of sophisticated engineering structures could be highlighted at large. However, for students who need some inspiration about the significance of the field of Materials Engineering, I thought the following video gives some sense of it. I know what you see in the video is relatively old, but it is a very interesting application of the knowledge you gain as a Materials Engineer.
Wednesday, August 1, 2012
Awesome Paint Brush
Sunday, July 29, 2012
The 3G Kitchen platform: A possible no-cost solution to better attract 3G signals to your home.
Tests proved that the orientation and placement of the dongle are important for the success of your 3G kitchen platform. Lol. In order to get better results:
1. Locate the central point of your aluminum plate/pot and place your dongle there.
1. Test Results
1.1 Issue
The issue is clearly observable in Picture 1: 3G signals were absent in my kitchen and in every other place belonging to our territorial inheritance :D.
1.2 Results
With the introduction of the new kitchen-made 3G antenna platform, we observed convincing results. They were very promising: the effective signal strength rose by more than 29%, so we were able to see two coloured dashes on the traditional 3G signal indicator (Picture 2).
1.3 Limitations
1. Remember this is an antenna. For best performance you will need to point the apparatus toward the nearest tower from which you get your 3G connection.
2. A typical antenna is not a signal station. If you put this in a zero-signal area, the antenna can do nothing. It is expected that at least some signal intermittently reaches the site so that the antenna can strengthen it. Every signal has its own limitations, based on various factors such as weather conditions, interference and so on. As signals traverse the atmosphere, some loss of energy is inevitable, and this is true for 3G as well. So you can do nothing with this apparatus when you get a dead signal.
2. Commercial Release of the Kitchen 3G antenna
1. A box of matches
2. A metal plate
3. Old TV antenna (optional).
4. Some glue
Special offer: you don't have to pay me for patents. Enjoy. :P
Wednesday, July 18, 2012
Effective Leadership is vital for the success of offshore outsourced software project management
Discussion
Knowingly or unknowingly, in many cases the expectations of a project are defined by, and dependent on, the way the project manager defines and directs them. This is one reason why I would urge a project manager to be as impartial and consistent as possible, putting the original objectives ahead of the various conflicting stakes of different stakeholders. It is a common fact that the success of a project is about meeting the defined objectives in the given time with the limited resources provided. However, changes are inevitable, and you will have to welcome them at any time, at the cost of various things. Given this dynamic, uncertain nature, the role of managing a project is exceptionally challenging. A well-compromised project will always challenge and stress the individual comfort zones of each stakeholder. That is another reason why strong, consistent and participative leadership is so important for the success of any project: to fight the frictions created by the evils of individual stakes. The objective should be to shape a single, homogeneous governance stature for every stakeholder, creating a healthy project environment with justice and peace. Such a culture will eliminate many of the evil forces which could hinder the project's success.
Conclusion
Given the domain-specific, realistic issues above, one could conclude that a successful offshore project manager should spread his leadership into every nook and corner of the project throughout its life cycle. Successful project leadership is not always about tight control. Effective leadership also includes the skills of empathy, effective influencing, the guts to communicate the genuine picture to demanding management, maintaining trust between conflicting groups, maintaining and communicating a fair and balanced yardstick for governance, establishing and correcting project cultures, and bringing inspiration, vision and motivation while earning a set of mission-driven followers.
Thursday, May 10, 2012
Jumping cursors - Freaking Dell Inspiron
Open the Control Panel, find "Mouse", then the "Pointer Options" tab, then UNCHECK the box next to "Enhance pointer precision", then Apply, then OK.
You may be a computer geek, yet such absurd, wearying extra features can exhaust you a lot. However, it is also advisable that you touch the keys precisely.
-- Albert Einstein
Monday, February 27, 2012
Algorithm Complexity
The runtime complexity of an algorithm was always a huge source of confusion for me when I was at college, and the math behind the variously termed Big O notation was hard to relate to practice. Trying to find a definition on Google, I would agree that "Big O notation is used in Computer Science to describe the performance or complexity of an algorithm". Generally, Big O describes the worst-case scenario, and can further be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm. I found the best way to understand Big O thoroughly was to work through some examples in code, which in fact is not my own original idea. With Big O, I generally compare the complexity of an algorithm relative to its input: Big O notation denotes the limiting behaviour of an algorithm. It is a simplified expression of how the run time of an algorithm scales in relation to the input.
For example (in Java):
/** Takes an array of strings and concatenates them.
* @param strings the array of strings to concatenate
* @return a string that is the result of concatenating all the strings
* in the array
*/
public static String badConcat(String[] strings) {
    String totalString = "";
    for (String s : strings) {
        for (int i = 0; i < s.length(); i++) {
            // each += copies the whole string built so far and appends one character
            totalString += s.charAt(i);
        }
    }
    return totalString;
}
Now think about what this is actually doing: it iterates over every character of the input and appends them one by one. That seems straightforward. The problem is that String is immutable, so every time you add a letter onto the string you have to create a new String. To do this you have to copy the values from the old string into the new string and add the new character. This means you will be copying the first letter n times, where n is the number of characters in the input, the second letter n-1 times, and so on; in total there will be (n-1)*n/2 copies. That is (n^2-n)/2, and since for Big O notation we keep only the highest-magnitude term (usually) and drop any constants multiplied with it, we end up with O(n^2). Using something like a StringBuilder will be along the lines of O(n log(n)). If you calculate the number of characters at the beginning and set the capacity of the StringBuilder, you can get it down to O(n). So if we had 1000 characters of input, the first example would perform roughly a million operations, the simple StringBuilder around 10,000, and the StringBuilder with a preset capacity about 1000 operations to do the same thing. These are rough estimates, but Big O notation is about orders of magnitude, not exact run times.
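For comparison, here is a minimal sketch of the preset-capacity StringBuilder variant described above (the method name betterConcat is mine; this code is not from the original discussion):
public static String betterConcat(String[] strings) {
    int length = 0;
    for (String s : strings) {
        length += s.length();                     // one pass to find the total size
    }
    StringBuilder sb = new StringBuilder(length); // capacity set once up front
    for (String s : strings) {
        sb.append(s);                             // each character is copied exactly once
    }
    return sb.toString();                         // overall O(n)
}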
O(1)
O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.
bool checkFirstBlock(String[] strings)
{
if(strings[0] == null)
{
return true;
}
return false;
}
O(N)
O(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set. The example below also demonstrates how Big O favors the worst-case performance scenario; a matching string could be found during any iteration of the for loop and the function would return early, but Big O notation will always assume the upper limit where the algorithm will perform the maximum number of iterations.
bool ContainsValue(String[] strings, String value)
{
for(int i = 0; i < strings.Length; i++)
{
if(strings[i] == value)
{
return true;
}
}
return false;
}
O(N^2)
O(N^2) represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N^3), O(N^4) and so on.
bool ContainsDuplicates(String[] strings)
{
for(int i = 0; i < strings.Length; i++)
{
for(int j = 0; j < strings.Length; j++)
{
if(i == j) // Don't compare with self
{
continue;
}
if(strings[i] == strings[j])
{
return true;
}
}
}
return false;
}
O(2^N)
O(2^N) denotes an algorithm whose growth doubles with each additional element in the input data set. The execution time of an O(2^N) function will quickly become very large.
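The post gives no code for this case, so here is the classic illustrative sketch (my addition, not from the original text): the naive recursive Fibonacci, whose work roughly doubles per level of recursion because every call spawns two further calls, giving an upper bound of O(2^N).
public static int fibonacci(int number) {
    if (number <= 1) {
        return number; // base cases: fibonacci(0) = 0, fibonacci(1) = 1
    }
    // each call spawns two more calls, so the call tree grows exponentially
    return fibonacci(number - 2) + fibonacci(number - 1);
}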
Logarithms
Binary search is a technique used to search sorted data sets. It works by selecting the middle element of the data set, essentially the median, and compares it against a target value. If the values match it will return success. If the target value is higher than the value of the probe element it will take the upper half of the data set and perform the same operation against it. Likewise, if the target value is lower than the value of the probe element it will perform the operation against the lower half. It will continue to halve the data set with each iteration until the value has been found or until it can no longer split the data set.
This type of algorithm is described as O(log N). The iterative halving of data sets described in the binary search example produces a growth curve that peaks at the beginning and slowly flattens out as the size of the data set increases: e.g. if an input data set containing 10 items takes one second to complete, a data set containing 100 items takes two seconds, and a data set containing 1000 items will take three seconds. Doubling the size of the input data set has little effect on its growth, because after a single iteration of the algorithm the data set is halved and therefore on a par with an input data set half the size. This makes algorithms like binary search extremely efficient when dealing with large data sets.
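As a sketch of that procedure (my own code, assuming a sorted int array; the post itself gives no implementation):
public static boolean binarySearch(int[] sorted, int target) {
    int low = 0;
    int high = sorted.length - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // probe the middle element
        if (sorted[mid] == target) {
            return true;                   // found the target value
        } else if (sorted[mid] < target) {
            low = mid + 1;                 // continue in the upper half
        } else {
            high = mid - 1;                // continue in the lower half
        }
    }
    return false;                          // range exhausted without a match
}
Each pass through the loop discards half of the remaining range, so at most about log2(N) iterations are needed.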
Oh, and do I use it?
Yes, all the time. When I'm figuring out how efficient my code is, it gives a great 'back-of-the-envelope' approximation of the cost. Obviously, you may find a hundred examples of Big O on the internet; they may include polynomial, factorial, logarithmic and other boring math. However, many experienced developers I consult are confused about why they really need this information and whether it could be useful enough to save the day. Well, what is the Big-O good for? The good news is that the Big-O belongs to an entire family of notation. It also serves as a good indicator of what algorithm to use once you take your individual circumstances into consideration. I do use Big O notation on occasion to convey algorithmic complexity to fellow programmers, and I use the underlying theory (e.g. Big O analysis techniques) all of the time when I think about which algorithms to use.
In a nutshell, the Big-O of a given algorithm combined with the specific problem knowledge is a great way to choose the best algorithm for your situation. I also agree that the Big-O lives in the land of theory and doesn't care very much about the real world.
So why is Big-O commonly associated with worst-case running times, and why is that imprecise?
It's because when considering the worst possible case, it is natural to give a limit on how bad that worst case can be, not how good it can be. That is, we want to give an upper bound on its degree of badness. Similarly, we often want to give a lower bound on how good the best case is (i.e., even on good inputs, there is still a limit on how fast the algorithm can go; what is that limit?), so Big-Omega gets associated with the best case.
That's why Big-O gets associated with worst-case running times and Big-Omega with best-case. And it's true that if someone just says "the running time" is O(n^2), then n^2 is indeed "closer" to the worst-case running time than to the best-case running time, in the sense that n^2 is "bigger" than all possible running times, and the worst-case running time is "bigger" than the best-case running time. But O(n^2) doesn't mean that the worst-case running time actually is n^2, just that it is at most n^2.
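A concrete instance (my example, not from the original discussion): insertion sort takes time proportional to n on an already-sorted input and proportional to n^2 on a reverse-sorted one. Saying its running time is O(n^2) is correct, because n^2 bounds every case from above; it does not claim that a typical run, or even the worst run, takes exactly n^2 steps.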
Myths about Big O
You cannot use Big-O to compare the speed of two algorithms. Big-O only says, approximately, how much slower an algorithm will get if you double the number of items processed, or how much faster it will get if you cut that number in half.
However, if you have two entirely different algorithms and one (A) is O(n^2) and the other (B) is O(log n), that does not mean A is slower than B. With 100 items, A might actually be ten times faster than B. It only says that as the item count grows, A's time will grow like n^2 and B's like log n. So if you benchmark both and you know how much time A takes to process 100 items and how much time B needs for the same 100 items, and A is faster than B, you can calculate at how many items B will overtake A in speed (since B slows down much more gradually than A, it will overtake A sooner or later, that is for sure).
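As a rough sketch of that extrapolation (my own code; the timings are invented: suppose a benchmark shows that at n = 100, A takes 1 ms and B takes 10 ms):
public class CrossoverEstimate {
    public static void main(String[] args) {
        // Hypothetical benchmark results: at n = 100,
        // A, which is O(n^2), takes 1 ms; B, which is O(log n), takes 10 ms.
        double cA = 1.0 / (100.0 * 100.0); // constant so that cA * n^2 = 1 at n = 100
        double cB = 10.0 / log2(100);      // constant so that cB * log2(n) = 10 at n = 100
        for (long n = 100; ; n *= 2) {
            double timeA = cA * n * n;     // extrapolated time for A
            double timeB = cB * log2(n);   // extrapolated time for B
            if (timeA > timeB) {
                System.out.println("B overtakes A somewhere below n = " + n);
                break;
            }
        }
    }

    private static double log2(double x) {
        return Math.log(x) / Math.log(2);
    }
}
With these invented constants, A is still faster at 200 items but B wins before 400; the exact crossover depends entirely on the measured constants, which is the point of the myth.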
Math of Big O
'Big-O' notation is used to compare the growth rates of two functions of a variable (say n) as n gets very large. If function f grows at least as quickly as function g, we say that g = O(f), meaning that for large enough n, f will always be at least as large as g up to a scaling factor.
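Formally (my addition; this is the standard textbook definition): g(n) = O(f(n)) means there exist constants c > 0 and n0 such that g(n) <= c * f(n) for all n >= n0.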
It turns out that this is a very useful idea in the analysis of algorithms, because we are often concerned precisely with the growth rates of functions which represent, for example, the time taken by two different algorithms. Very coarsely, we can say that an algorithm with run time t1(n) is at least as efficient as an algorithm with run time t2(n), for large enough n, if t1 = O(t2); here n is typically the 'size' of the problem, like the length of the array, the number of nodes in the graph, or whatever.
This stipulation, that n gets large enough, allows us to pull a lot of useful tricks. Perhaps the most often used one is that you can simplify functions down to their fastest growing terms. For example n^2 + n = O(n^2) because as n gets large enough, the n^2 term gets so much larger than n that the n term is practically insignificant. So we can drop it from consideration.
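To see this with the formal definition above: n^2 + n <= 2*n^2 for all n >= 1, so taking c = 2 and n0 = 1 shows that n^2 + n = O(n^2).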
However, it does mean that big-O notation is less useful for small n, because the slower growing terms that we've forgotten about are still significant enough to affect the run-time.
What we now have is a tool for comparing the costs of two different algorithms, and a shorthand for saying that one is quicker or slower than the other. Big-O notation can be abused, which is a shame, as it is imprecise enough already! There are equivalent notations for saying that a function grows less quickly than another (little-o) and that two functions grow at the same rate (Big-Theta).