
Wednesday, March 16, 2016

An algorithm is a well defined set of rules, a recipe, in effect, for solving some kind of computational problem. So, for example, maybe you're given a bunch of numbers and you want to rearrange them into sorted order. Maybe you're given a road network with an origin and a destination, and you want to compute the shortest path from point A to point B. Maybe you're given a bunch of tasks with deadlines, and you want to know whether or not it's feasible to accomplish all of those tasks by their respective deadlines.

So, why study algorithms? Well, first of all, understanding the field of algorithms, and also the related field of data structures, is crucial for doing serious work in pretty much any other branch of computer science. There are plenty of examples here; let me just state one super obvious one, which is that search engines use a tapestry of algorithms to efficiently compute the relevance of various web pages. The most famous such algorithm, which you may have heard of, is the PageRank algorithm in use by Google. Second, algorithms play a key role in modern technological innovation. Indeed, in a December 2010 report to the United States White House, the President's Council of Advisors on Science and Technology argued that "in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed", the kind you'd be familiar with in the form of Moore's Law. Third, although this is getting significantly outside the scope of this course, algorithms are increasingly being used to provide a novel lens on processes outside of computer science and technology. For example, the study of quantum computation has provided a new and computational viewpoint on quantum mechanics. Price fluctuations in economic markets can be fruitfully viewed as an algorithmic process. And even evolution can be usefully thought of as a surprisingly effective search algorithm.

The last two reasons I'm going to give you might sound a little flippant, but I think there's more than a grain of truth to both of them. Now, I don't know about you, but back when I was a student, my favorite classes were always the challenging classes. But after I struggled through them, I somehow felt like I had a few more IQ points than when I started. So I hope this course provides a similar experience for many of you: that, on the one hand, it's a bit of a struggle, you find the concepts challenging, but perhaps you feel just a tinge smarter after we're done. And finally, designing and analyzing algorithms is fun; indeed, it is addictive.

So let's now descend from these lofty generalities and get much more concrete. And let's remember that we've all been learning and using algorithms since we were little kids. Once upon a time, in roughly 3rd grade or so, you learned how to multiply two numbers. Now, you probably weren't thinking in these terms at the time, but multiplying two numbers is certainly a well defined computational problem, and the procedure you learned back in 3rd grade or so is indeed an algorithm. So let's just make that a little bit more precise. In this computational problem, we're given as input two numbers, let's say with n digits each. And to make things interesting, why don't you think of n as being really quite large, say in the thousands. Maybe we're implementing an algorithm that's going to be used in a cryptography application, where you need to keep track of really quite large numbers. So, if we call the two input numbers x and y, the problem is simply to compute their product, x times y.
So, a quick digression: I'll certainly be the first to admit that my handwriting is not the greatest. I got a C in penmanship back in elementary school, and I think the teacher was being a little generous. But, you know, it's an acquired taste, so trust me, you'll get used to it. Back to integer multiplication. Now, when we talk about procedures for multiplying two numbers, we're going to be interested in counting how many steps are required to execute the multiplication. So how do we count a step? We'll talk more about this later, but for multiplying two numbers, let's just call a step the addition or multiplication of two single-digit numbers.

So let's review the integer multiplication algorithm that we learned back in grade school, just by working through a concrete example. Specifically, let's take n = 4 and look at two 4-digit numbers, say 5678 and 1234. Now, as you'll recall, the procedure we learned way back when was just to take each digit of the bottom number and multiply it by each digit of the top number, and then to take each of those n partial products and add them up. So, for example, you start with the 4: you multiply it by 8, you get 32, so you write down the 2 and carry the 3. 4 times 7 is 28, add the 3, you get 31; write down the 1, carry the 3. And so on. That gives you the first partial product, 22712. Then you do a shift, so you effectively put a 0 in the final digit, and you repeat that procedure using the next digit, the 3. So again, 3 times 8 is 24; write down the 4, carry the 2, and so on. And you can compute the final two partial products using the 2 and the 1. Having computed all of the partial products, you just add them up to get the final product.

Now, the question I'm interested in is: how much work, how many primitive operations, did we do to multiply these two numbers? And, more generally, how many does it take to multiply two n-digit numbers, as a function of n? Well, just to get a ballpark view of what's going on: we started with two n-digit numbers, and at the end of the day we basically filled out a grid of size roughly n by n, give or take a little bit. So, just in ballpark terms, it seems that multiplying two n-digit numbers requires essentially a quadratic number of operations: a small number of operations to fill each entry of a grid that is roughly n by n, so roughly n^2 operations overall.

In a little more detail, we can look at each of the partial products separately. To compute, say, the first partial product, what did we do? We took the 4 and multiplied it by each of the n digits on top. We also did some carries, which affects things a little bit, but in the end we did somewhere between n and 2n primitive operations to compute that first partial product. And that's true, of course, for each of the n partial products: roughly n operations per partial product, so roughly n^2 operations to compute all of the partial products. Then we have to add them all up at the end, but that takes only an extra constant times n primitive operations. So, summarizing: overall, the number of primitive operations required to multiply two n-digit numbers using this procedure grows quadratically with the length n. For example, if you double the length of the two numbers, you should expect to do roughly four times as many primitive operations if you use this iterative algorithm.
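To make that operation count concrete, here is a minimal Python sketch of the grade-school procedure, with a rough counter for primitive operations (single-digit multiplications and additions). The function name and the exact counting conventions are my own illustrative choices, not something specified in the lecture:

```python
def grade_school_multiply(x, y):
    """Multiply x and y digit by digit; return (product, rough primitive-op count)."""
    x_digits = [int(d) for d in str(x)][::-1]   # least significant digit first
    y_digits = [int(d) for d in str(y)][::-1]
    ops = 0
    partials = []
    for shift, yd in enumerate(y_digits):
        row = [0] * shift                        # the trailing zeros from shifting
        carry = 0
        for xd in x_digits:
            prod = yd * xd + carry               # one single-digit multiplication...
            ops += 2                             # ...plus one addition for the carry
            row.append(prod % 10)
            carry = prod // 10
        if carry:
            row.append(carry)
        partials.append(int("".join(map(str, reversed(row)))))
    total = 0
    for p in partials:                           # add up the n partial products:
        total += p                               # an extra constant-times-n additions
        ops += len(str(p))
    return total, ops

print(grade_school_multiply(5678, 1234))   # (7006652, ...)
```

Comparing the operation counts for inputs of length n and length 2n should show roughly the four-fold growth described above.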
Now, depending on what type of 3rd grader you were, you might well have accepted this procedure as the unique, or at least the optimal, way of multiplying two n-digit numbers together. And if you want to be a serious algorithm designer, that kind of obedient attitude is something you're going to have to learn to discard. Here's a favorite quote of mine. It's from a quite early textbook in the field, The Design and Analysis of Computer Algorithms by Aho, Hopcroft, and Ullman. After highlighting a number of algorithm design techniques, they conclude by saying, "perhaps the most important principle for the good algorithm designer is to refuse to be content". So that's a beautifully accurate quote. I might rephrase it more succinctly by just saying: if you want to be a good algorithm designer, you should adopt the following mantra: you should always be asking, "can we do better?" This is a particularly appropriate question to ask when you're faced with some kind of naive or obvious algorithm for solving a problem, like the 3rd grade algorithm for integer multiplication. This will come up over and over again: we'll see a problem, there will be an obvious solution, but with some extra algorithmic ingenuity, by detecting subtle structure in the problem, we'll be able to do significantly, qualitatively better than the naive or obvious solution.

So let's apply this cocky attitude to the problem of multiplying two integers. Let's just suppose, as a working hypothesis, that there is some procedure which is better than what we learned back in elementary school. What could it possibly look like? Well, as someone with some programming experience, you know that there are not only iterative algorithms, iterative programs, like the one we just outlined for integer multiplication, but also recursive programs. These are programs that call themselves, typically on an input of a similar form but of smaller size. So, as a working hypothesis, let's imagine that there's some interesting recursive approach to multiplying two integers. What must such an algorithm look like? Well, it must somehow fundamentally involve calling itself on smaller problems, that is, on smaller numbers, numbers with fewer digits. So what kind of decomposition of numbers could be used to enable this kind of recursive approach? Well, if you think about it, there's a pretty natural way to break a number with a bunch of digits into a couple of numbers with fewer digits. Namely, just take the first half of the digits, the first n/2 digits, and regard that as a number in its own right; and likewise regard the second half of the digits as a number in its own right. So perhaps this decomposition will have some use in enabling a recursive attack on how to multiply two integers; a concrete sketch of one such recursive algorithm follows below.

OK. Sometimes you have interesting algorithmic ideas for which it's totally not obvious what kinds of properties, or what kinds of performance guarantees, those algorithms enjoy. So in particular, confronted with these different integer multiplication algorithms, the grade-school algorithm and recursive approaches like the one sketched below, how can we not wonder which of them is the best? Which requires the fewest primitive operations to multiply two integers? In particular, does one of these novel recursive approaches do better than the algorithm we learned back in elementary school? The answer to that question is totally non-obvious, and it requires non-trivial mathematical analysis to answer.
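Here is a minimal sketch of what one such recursive algorithm could look like, assuming the half-split decomposition just described; this is my own illustration, not necessarily the exact recursive algorithm the course has in mind. Writing x = 10^(n/2) * a + b and y = 10^(n/2) * c + d, the product expands as x * y = 10^n * (a*c) + 10^(n/2) * (a*d + b*c) + b*d, which can be computed with four recursive calls on half-length numbers:

```python
def recursive_multiply(x, y):
    """Multiply via the half-split decomposition: four recursive calls on halves."""
    if x < 10 or y < 10:                 # base case: a single-digit multiplication
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10 ** half)         # x = a * 10^half + b
    c, d = divmod(y, 10 ** half)         # y = c * 10^half + d
    ac = recursive_multiply(a, c)
    ad = recursive_multiply(a, d)
    bc = recursive_multiply(b, c)
    bd = recursive_multiply(b, d)
    return ac * 10 ** (2 * half) + (ad + bc) * 10 ** half + bd

print(recursive_multiply(5678, 1234))    # 7006652, same as the grade-school answer
```

Whether this four-call recursion, or a cleverer variant that saves recursive calls, actually beats the quadratic grade-school algorithm is exactly the kind of question that requires the analysis mentioned above.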
In the upcoming lectures, I'll provide you with the tools not only to precisely understand and answer this question, but more generally to predict the running times of an entire genre of divide-and-conquer algorithms. OK, that's enough for this time. See you next time :)
