Wednesday, March 30, 2016
On 10:47:00 AM by your education in Algorithm
In this article and the next we are going to take you to the next level and peer under the hood into implementations of balanced binary search trees. Now frankly, the gory details of balanced binary search tree implementations get pretty complicated, and if you really want to understand them at a fine-grained level there is no substitute for reading an advanced algorithms textbook that covers the topic and/or studying open-source implementations of these data structures. I don't see the point of regurgitating all of those details here. What I do see the point in doing is giving you the gist of some of the key ideas in these implementations. In this article I want to focus on a key primitive, that of rotations, which is common to all balanced binary search tree implementations. Whether it's red-black trees, AVL trees, B or B+ trees, whatever, all of them use rotations.
In the next article we'll talk a little bit more about the details of red-black trees in particular. So, what is the point behind these magical rotation operations? Well, the goal is to do just constant work, just rewire a few pointers, and yet locally rebalance a search tree without violating the search tree property. There are two flavors of rotations, left rotations and right rotations. In either case, when you invoke a rotation it's on a parent-child pair in a search tree. If it's a right child of the parent, then you use a left rotation; we'll discuss that on this slide. A right rotation is, in some sense, the inverse operation, which you use when you have a left child of the parent. So what does the generic picture look like when you have a node x in a search tree and it has some right child y? Well, x, in general, might have some parent. x might also have some left subtree; let's call that left subtree of x, A. It could, of course, be empty. And then y has its two subtrees; let's call the left subtree of y B and the right subtree C. It's going to be very important that rotations preserve the search tree property.
To see why that's true, let's just be clear on exactly which elements are bigger than which other elements in this picture. First of all, y being a right child of x, y is going to be bigger than x. Now, all of the keys which lie in the subtree A, because they're to the left of x, are going to be even smaller than x. By the same token, anything in the subtree capital C is to the right of y, so all that stuff is going to be even bigger than y. What about subtree capital B? What about the nodes in there? Well, on the one hand, all of these are in x's right subtree, right? To get to any node in B, you go through x and then you follow the right child pointer to y. So that means everything in B is bigger than x, yet at the same time it's all in the left subtree of y, so these are all things smaller than y. Summarizing, all of the nodes in B have keys strictly in between x and y. So now that we're clear on how all of the search keys in the different parts of this picture relate to each other, I can describe the point of a left rotation. Fundamentally the goal is to invert the relationship between the nodes x and y. Currently x is the parent and y is the child; we want to rewire a few pointers so that y is now the parent and x is the child.
Now, what's cool is that given this goal, there's pretty much a unique way to put all these pieces back together to accomplish it. So let's just follow our nose. Remember the goal: y should be the parent and x should be the child. Well, x is less than y, and there's nothing we can do about that, so if x is going to be a child of y, it's got to be the left child. Your first question might be, well, what happened to x's parent? So x used to have some parent, let's call it p, and x's new parent is y. Similarly, y used to have parent x, and there's a question of what y's new parent should be; well, y just inherits x's old parent p. This change has no bearing on the search tree property: either this collection of nodes was p's left subtree, in which case all of these nodes were less than p, or it was p's right subtree, in which case all of these nodes are bigger than p, but p couldn't care less which of x or y is its direct descendant. Now let's move on to thinking about what we should do with the subtrees A, B, and C. We have three subtrees to reassemble into this picture, and fortunately we have three slots available: x has both of its child pointers available, and y has its right child available. So what can we put where? Well, A is the subtree of stuff which is less than both x and y, so that should sensibly be x's left child; that's exactly as it was before. By the same token, capital C is the stuff bigger than both x and y, so that should be the right child of y, the bigger node, just as before. What's more interesting is what happens to subtree capital B. B used to be y's left child, but that can't happen anymore, because now x is y's left child. So the only hope is to slot capital B into the only space we have for it, x's right child. Fortunately for us, this actually works: sliding B into the only open space we have for it, x's right child, does indeed preserve the search tree property. Recall we noticed that every key in capital B is strictly between x and y; therefore it had better wind up in x's right subtree and in y's left subtree, and it does, because that's exactly where we put it. So that's a left rotation, and if you understand a left rotation then you understand a right rotation as well, but we'll save that for the next article.
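To make the pointer rewiring concrete, here is a minimal sketch of a left rotation in Python. The Node class and its field names (key, left, right, parent) are my own illustrative assumptions, not from the article, and a real balanced-tree implementation would also update its bookkeeping (colors, heights, subtree sizes) after the rotation.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None     # subtree with smaller keys (A for node x)
        self.right = None    # subtree with larger keys
        self.parent = None

def left_rotate(x):
    """Rotate left about the parent-child pair (x, y), where y = x.right.
    Afterwards y is the parent and x is its left child; only a constant
    number of pointers change."""
    y = x.right
    assert y is not None, "a left rotation needs a right child"

    # Subtree B (y's old left subtree) becomes x's right child; its keys
    # are strictly between x and y, so the search tree property holds.
    x.right = y.left
    if y.left is not None:
        y.left.parent = x

    # y inherits x's old parent p (p doesn't care which of x or y it points to).
    y.parent = x.parent
    if x.parent is not None:
        if x.parent.left is x:
            x.parent.left = y
        else:
            x.parent.right = y

    # Finally, x becomes y's left child.
    y.left = x
    x.parent = y
    return y  # new subtree root; the caller must update the tree root if x was it
```

A right rotation is the mirror image: swap every left and right in the sketch above.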
Sunday, March 20, 2016
On 7:36:00 AM by your education in Algorithm
This article is about the Balanced Binary Search Tree. Like our discussion of other data structures, we'll begin with the what: that is, we'll take the client's perspective and ask what operations are supported by this data structure, what you can actually use it for. Then we'll move on to the how and the why: we'll peer under the hood of the data structure, look at how it's actually implemented, and then use that understanding of the implementation to see why the operations have the running times that they do. So what is a Balanced Binary Search Tree good for? Well, I recommend thinking about it as a dynamic version of a sorted array. That is, if you have data stored in a Balanced Binary Search Tree, you can do pretty much anything on the data that you could if it were just a static sorted array, but in addition the data structure can accommodate insertions and deletions; you can accommodate a dynamic set of data that changes over time. So to motivate the operations that a Balanced Binary Search Tree supports, let's just start with the sorted array and look at some of the things you can easily do with data that happens to be stored in such a way. Let's think about an array that has numerical data, although, generally, as we've said, in data structures there is usually other associated data that you actually care about, and the numbers are just some unique identifier for each of the records. So these might be employee ID numbers, social security numbers, packet IDs in a networking context, etcetera. So what are some things that are easy to do given that your data is stored as a sorted array? Quite a bunch of things. First of all, you can search; recall that searching in a sorted array is generally done using binary search. This is how we used to look up phone numbers when we had physical phone books: you'd start in the middle of the phone book, and if the name you were looking for was less than the midpoint, you'd recurse on the left-hand side, otherwise you'd recurse on the right-hand side. As we discussed back in the Master Method lectures long ago, this is going to run in logarithmic time. Roughly speaking, every time you recurse you've thrown out half of the array, so you're guaranteed to terminate within a logarithmic number of iterations; binary search therefore takes logarithmic time.
Something else we discussed in previous lectures is the selection problem. Previously, we discussed this in the much harder context of unsorted arrays. Remember, in the selection problem, in addition to the array, you're given an order statistic. So if your target order statistic is seventeen, that means you're looking for the seventeenth-smallest number stored in the array. In previous lectures we worked very hard to get a linear-time algorithm for this problem in unsorted arrays. Now, in a sorted array, if you want to know the seventeenth-smallest element of the array, that's a pretty easy problem: just return whatever element happens to be in the seventeenth position of the array. Since the array is sorted, that's where it is, so no problem; in a sorted array you can solve the selection problem in constant time. Of course, two special cases of the selection problem are finding the minimum element of the array, which is just the order statistic problem with i = 1, and the maximum element, which is just i = n. These correspond to returning the element in the first position and the last position of the array, respectively.
Well, let's do some more brainstorming: what other operations could we implement on a sorted array? Here's a couple more. There are operations called the predecessor and successor operations, and the way these work is you start with one element. Say you start with a pointer to the 23, and you want to know where in this array the next-smallest element is; that's the predecessor query, and the successor operation returns the next-largest element in the array. So the predecessor of the 23 is the 17, and the successor of the 23 would be the 30. Again, in a sorted array these are trivial, right? The predecessor is just one position back in the array, and the successor is one position forward, so given a pointer to the 23 you can return the 17 or the 30 in constant time. What else? Well, how about the rank operation? We haven't discussed this operation in the past. What rank asks is: how many keys stored in the data structure are less than or equal to a given key? So, for example, the rank of 23 would be equal to 6, because 6 of the 8 elements in the array are less than or equal to 23. And if you think about it, implementing the rank operation is really no harder than implementing search. All you do is search for the given key, and wherever the search terminates in the array, you just look at that position and, boom, that's the rank of that element. So, for example, if you do a binary search for 23 and, when it terminates, you discover it's in position number six, then you know the rank is six. If you do an unsuccessful search, say you search for 21, well, then you get stuck in between the 17 and the 23, and at that point you can conclude that the rank of 21 in this array is five.
Let me just wrap up the list with a final operation which is trivial to implement in a sorted array: namely, you can output, or print, the stored keys in sorted order, let's say from smallest to largest. Naturally, all you do here is a single scan from left to right through the array, outputting whatever element you see next. The time required is constant per element, or linear overall. So that's a quite impressive list of supported operations. Could you really be so greedy as to want still more from our data structure? Well, yeah, certainly. We definitely want more than just what we have on the slide, the reason being that these are operations that operate on a static data set which is not changing over time. But the world in general is dynamic. For example, if you are running a company and keeping track of the employees, sometimes you get new employees and sometimes employees leave. We'd like a data structure that not only supports these kinds of operations but also insertions and deletions. Now, of course, it's not that it's impossible to implement insert or delete in a sorted array; it's just that they're going to run way too slowly. In general, you have to copy over a linear amount of stuff on an insertion or deletion if you want to maintain the sorted array property. So this linear-time performance for insertion and deletion is unacceptable unless you barely ever do those operations.
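To make these sorted-array operations concrete, here is a minimal Python sketch using the standard bisect module. The array below is my own illustrative choice (not necessarily the lecture's exact example), picked so that 17, 23, and 30 sit next to each other as described above.

```python
from bisect import bisect_left, bisect_right

A = [3, 6, 10, 12, 17, 23, 30, 41]   # 8 sorted keys, illustrative only

def search(a, key):
    """Binary search: index of key, or None if absent. O(log n)."""
    i = bisect_left(a, key)
    return i if i < len(a) and a[i] == key else None

def select(a, i):
    """i-th order statistic (i = 1 is the minimum). O(1) in a sorted array."""
    return a[i - 1]

def rank(a, key):
    """How many stored keys are <= key. O(log n)."""
    return bisect_right(a, key)

def predecessor(a, key):
    """Largest stored key strictly smaller than key, or None."""
    i = bisect_left(a, key)
    return a[i - 1] if i > 0 else None

def successor(a, key):
    """Smallest stored key strictly larger than key, or None."""
    i = bisect_right(a, key)
    return a[i] if i < len(a) else None

print(search(A, 23))                          # 5 (0-based position)
print(select(A, 1), select(A, len(A)))        # 3 41  (minimum and maximum)
print(rank(A, 23), rank(A, 21))               # 6 5
print(predecessor(A, 23), successor(A, 23))   # 17 30
print(*A)                                     # sorted output: one linear scan
```

Insertion and deletion, by contrast, have to shift up to n array entries to keep the array sorted, which is exactly the linear-time cost described above.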
So, the raison d'etre of the Balanced Binary Search Tree is to implement this exact same rich set of operations supported by a sorted array, but in addition, insertions and deletions. Now, a few of these operations won't be quite as fast; we have to give up a little bit. Instead of constant time, some of them run in logarithmic time, but we still get logarithmic time for all of these operations and linear time for outputting the elements in sorted order, and on top of that we'll be able to insert and delete in logarithmic time. So let me just spell that out in a little more detail. A Balanced Binary Search Tree will act like a sorted array plus it will have fast, meaning logarithmic-time, inserts and deletes. So let's go ahead and spell out all of those operations.
Search is going to run in O(log n) time, just like before. Select runs in constant time in a sorted array and here it's going to take logarithmic time, so we give up a little bit on the selection problem, but we'll still be able to do it quite quickly. Even for the special cases of finding the minimum or finding the maximum in our data structure, we're going to need logarithmic time in general. Same thing for finding predecessors and successors: they're no longer constant time, they become logarithmic. Rank took logarithmic time even in the sorted-array version, and it will remain logarithmic here. As we'll see, we lose essentially nothing over the sorted array if we want to output the key values in sorted order, say from smallest to largest. And crucially, we have two more fast operations compared to the sorted-array data structure. We can insert: if you hire a new employee, you can insert them into your data structure, and if an employee decides to leave, you can remove them from the data structure. You do not have to spend linear time like you did for the sorted array; you only have to spend logarithmic time, where, as always, n is the number of keys being stored in the data structure. So the key takeaway here is that if you have data whose keys come from a totally ordered set, say numeric keys, then a Balanced Binary Search Tree supports a very rich collection of operations. If you anticipate doing a lot of different processing using the ordering information of these keys, then you really might want to consider a Balanced Binary Search Tree to maintain them.
Keep in mind, though, that we have seen a couple of other data structures which don't do quite as much as balanced binary search trees, but what they do, they do better. We just discussed the sorted array on the last slide: if you have a static data set and you don't need inserts and deletes, then by all means don't bother with a Balanced Binary Search Tree; just use a sorted array, because it will do everything super fast. But we also saw dynamic data structures which don't do as much, but what they do, they do very well. We saw the heap: what a heap is good for is that it's just as dynamic as a search tree. It allows insertions and deletions, both in logarithmic time, and in addition it keeps track of the minimum element or the maximum element. Remember, in a heap you can choose whether you want to keep track of the minimum or keep track of the maximum, but unlike a search tree, a heap does not simultaneously keep track of both the minimum and the maximum. So if you just need those three operations, insertions, deletions, and remembering the smallest, and this would be the case, for example, in a priority queue or scheduling application as discussed in the heap videos, then a Balanced Binary Search Tree is overkill; you might want to consider a heap instead. In fact, the benefits of a heap don't show up in the big-O notation here, as both have logarithmic operation times, but the constant factors, both in space and time, are going to be better with a heap than with a Balanced Binary Search Tree.
The other dynamic data structure that we discussed is a hash table. What hash tables are really, really good at is handling insertions and searches, that is, lookups; sometimes, depending on the implementation, they handle deletions really well too. So if you don't actually need to remember things like minima, maxima, or ordering information on the keys, and you just have to remember what's there and what's not, then the data structure of choice is definitely the hash table, not the Balanced Binary Search Tree. Again, the Balanced Binary Search Tree would be fine and would give you logarithmic lookup time, but it's kind of overkill for the problem: all you need is fast lookups, and a hash table, recall, will give you constant-time lookups. So that will be a noticeable win over the Balanced Binary Search Tree. But if you want a very rich set of operations for processing your data, then the Balanced Binary Search Tree could be the optimal data structure for your needs.
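To give a feel for how a search tree supports this richer set of operations, here is a minimal Python sketch of a plain (unbalanced) binary search tree with insert, search, minimum, and sorted output. This is my own illustrative code, not the article's; a real balanced tree (red-black, AVL, and so on) adds rebalancing via rotations like the ones described earlier on this page, which is what actually guarantees the O(log n) heights and running times.

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key and return the subtree root. O(height) time."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root   # duplicates are ignored in this sketch

def search(root, key):
    """Return the node holding key, or None. O(height) time."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root

def minimum(root):
    """The leftmost node holds the smallest key. O(height) time."""
    while root.left is not None:
        root = root.left
    return root

def in_order(root):
    """Yield the keys in sorted order in linear total time."""
    if root is not None:
        yield from in_order(root.left)
        yield root.key
        yield from in_order(root.right)

root = None
for k in [23, 17, 30, 12, 41, 3, 10, 6]:
    root = insert(root, k)
print(list(in_order(root)))   # [3, 6, 10, 12, 17, 23, 30, 41]
print(minimum(root).key)      # 3
print(search(root, 21))       # None (an unsuccessful search)
```

Without rebalancing, inserting keys in sorted order would degrade the height, and hence every O(height) operation, to linear time; that is precisely the failure mode that rotations are there to prevent.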
Friday, March 18, 2016
On 8:17:00 PM by your education in Algorithm
This article covers the randomized contraction algorithm for computing the minimum cut of a graph. Let's just recall what the minimum cut problem is: we're given as input an undirected graph, and parallel edges are allowed.
In fact, they will arise naturally throughout the course of the algorithm. That is, we may be given a pair of vertices with multiple edges that have that pair as endpoints. Now, I do sort of assume you've watched the other video on how graphs are actually represented, although that's not going to play a major role in the description of this particular algorithm. And, again, the goal is to compute a minimum cut. A cut is a partition of the graph's vertices into two groups, A and B, and the number of edges crossing the cut is simply the number of edges with one endpoint on each side. Amongst the exponentially many possible cuts, we want to identify one that has the fewest crossing edges, a "min cut".
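As a small illustration of this definition (my own sketch, not from the lecture), here is how you would count the edges crossing a given cut when the graph is stored as an edge list, with parallel edges simply repeated:

```python
def crossing_edges(edges, group_a):
    """Count edges with exactly one endpoint in group_a.

    edges: list of (u, v) pairs; parallel edges appear multiple times.
    group_a: set of vertices on one side of the cut; B is everything else.
    """
    return sum(1 for u, v in edges if (u in group_a) != (v in group_a))

# A square plus one diagonal, like the 4-vertex example below
# (the vertex labels 1..4 are my own).
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(crossing_edges(edges, {1}))      # 3
print(crossing_edges(edges, {1, 2}))   # 3
print(crossing_edges(edges, {4}))      # 2  <- a minimum cut of this graph
```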
So, here's the random contraction algorithm. This algorithm was devised by David Karger back when he was an early Ph.D. student at Stanford, in the early 90s, so, like I said, quote unquote only about twenty years ago. The basic idea is to use random sampling. Now, we'd known forever, right, ever since QuickSort, that random sampling could be a good idea in certain contexts, in particular when you're sorting and searching. One of the things that was such a breakthrough about Karger's contraction algorithm is that it showed random sampling can be extremely effective for fundamental graph problems. So here's how it works. We're just gonna have one main loop. Each iteration of this while loop is going to decrease the number of vertices in the graph by 1, and we're gonna terminate when we get down to just two vertices remaining. In a given iteration, here's the random sampling: amongst all of the edges that remain in the graph at this point, we're going to choose one of those edges uniformly at random; each edge is equally likely. Once we've chosen an edge, that's when we do the contraction. We take the two endpoints of the edge, call them the vertex u and the vertex v, and we fuse them into a single vertex that represents both of them. This may become clearer when I go through a couple of examples on the next couple of slides. This merging may create parallel edges, even if you didn't have them before; that's okay, we're gonna leave the parallel edges in. And it may create a self-loop, an edge both of whose endpoints are the same. Self-loops are stupid, so we're just gonna delete them as they arise. Each iteration decreases the number of vertices that remain; we start with n vertices and end up with 2, so after n - 2 iterations, that's when we stop, and at that point we return the cut represented by those two final vertices. You might well be wondering what I mean by the cut represented by the final two vertices.
But I think that will become clear in the examples, which I'll proceed to now. So suppose the input graph is the following four-node, five-edge graph: a square plus one diagonal. How would the contraction algorithm work on this graph? Well, of course, it's a randomized algorithm, so it could work in different ways, and so we're gonna look at two different trajectories. In the first iteration each of these five edges is equally likely; each is chosen for contraction with twenty percent probability. For concreteness, let's say that the algorithm happens to choose this edge to contract, fusing its two endpoints. After the fusion the two vertices on the left have become one, whereas the two vertices on the right are still hanging around like they always were. So the edge between the two right vertices is unchanged. The contracted edge between the two vertices on the left has gotten sucked up, so that's gone. And what remains are these two edges here, the edge on top and the diagonal, and those are now parallel edges between the fused node and the upper right node. And then I also shouldn't forget the bottom edge, which is the edge from the lower right node to the super node. So that's what we mean by taking a pair of vertices and contracting them: the edge that previously connected them vanishes, and all the other edges just get pulled into the fused node. So that's the first iteration of one possible execution of Karger's algorithm. Now we proceed to the second iteration of the contraction algorithm, and the same thing happens all over again. We pick an edge uniformly at random. Now there are only four edges that remain, each of which is equally likely to be chosen, so each has 25% probability. For concreteness, let's say that in the second iteration we wind up choosing one of the two parallel edges, say this one here. So what happens? Well, now, instead of three vertices, we go down to 2. We have the original bottom right vertex, which hasn't participated in any contractions at all, so that's as it was. And then we have the second vertex, which actually represents the fusion of all of the other three vertices: two of them, the leftmost vertices, were fused in iteration 1, and now the upper right vertex got fused in with them to create this super node representing three original vertices. So what happens to the four edges? Well, the contracted one disappears; that just gets sucked into the super node, and we never see it again. And then the other three edges go where they're supposed to go. There's the edge that used to be the rightmost edge; that has no hash mark. There's the edge with two hash marks; that goes between the same two nodes that it did before, just that the super node is now an even bigger node representing three original nodes. And then the edge which was parallel to the one that we contracted, the other one with a hash mark, becomes a self-loop, and remember, whenever self-loops like this appear, the algorithm deletes them automatically. And now that we've done our n - 2 iterations, we're down to just two nodes, and we return the corresponding cut. By corresponding cut, what I mean is, one group of the cut is the set of vertices that got fused into each other and wound up corresponding to one super node.
In this case, that's everything but the bottom right node. The other group is the set of original nodes corresponding to the other super node of the contracted graph, which, in this case, is just the bottom right node by itself. So this set A is going to be these three nodes here, which all got fused, contracted into each other, and B is going to be this node over here, which never participated in any contractions at all. And what's cool is, you'll notice, this does in fact define a min cut. There are two edges crossing this cut, the rightmost one and the bottommost one, and I'll leave it for you to check that there is no cut in this graph with fewer than two crossing edges, so this is in fact a min cut. Of course, this is a randomized algorithm, and randomized algorithms can behave differently on different executions.
So let's look at a second possible execution of the contraction algorithm on this exact same input. Let's even suppose the first iteration goes exactly the same way; so, in particular, this leftmost edge gets chosen in the first iteration. Then, instead of choosing one of the two parallel edges, suppose that we choose the rightmost edge to contract in the second iteration. Totally possible, a 25% chance that it's gonna happen. Now what happens after the contraction? Well, again, we're gonna be left with two nodes, no surprise there. The contracted edge gets sucked into oblivion and vanishes, but the other three edges, the ones with the hash marks, all stick around and become parallel edges between these two final nodes. This, again, corresponds to a cut (A, B), where A is the left two vertices and B is the right two vertices. Now, this cut, you'll notice, has three crossing edges, and we've already seen that there is a cut with two crossing edges; therefore, this is not a min cut. So what have we learned? We've learned that the contraction algorithm sometimes identifies the min cut and sometimes it does not. It depends on the random choices that it makes, on which edges it chooses to randomly contract. So the obvious question is, you know, is this a useful algorithm? In particular, what is the probability that it gets the right answer?
We know it's bigger than 0, and we know it's less than 1. Is it close to 1, or
is it close to 0? So we find ourselves in a familiar position. We have what
seems like a quite sweet algorithm, this random contraction algorithm. And we
don't really know if it's good or not. We don't really know how often it works,
and we're going to need to do a little bit of math to answer that question. So
in particular, we'll need some conditional probability. For those of you who need a refresher, go to your favorite source, or you can watch the Probability Review Part II to get a refresher on conditional probability and independence. Once you have that in your mathematical toolbox, we'll be able to totally nail this question and get a very precise answer to exactly how frequently the contraction algorithm successfully computes the minimum cut.
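For readers who want to experiment, here is a minimal sketch of the contraction algorithm in Python. It is my own illustrative implementation (edge-list representation, vertex labels 0..n-1, input assumed connected with no self-loops), not the lecture's code: it keeps parallel edges, discards self-loops as they arise, and after n - 2 contractions returns the two groups of the cut together with the number of crossing edges.

```python
import random

def contraction_algorithm(n, edges):
    """One run of the random contraction algorithm.

    n: number of vertices, labeled 0..n-1.
    edges: list of (u, v) pairs; parallel edges may repeat.
    Returns (group_a, group_b, number_of_crossing_edges).
    """
    groups = {v: {v} for v in range(n)}   # each super node remembers its originals
    edges = list(edges)

    while len(groups) > 2:
        u, v = random.choice(edges)       # uniform over the remaining edges
        groups[u] |= groups.pop(v)        # fuse v into u
        # Re-point v's edges at u, then delete the self-loops this creates.
        edges = [(u if a == v else a, u if b == v else b) for a, b in edges]
        edges = [(a, b) for a, b in edges if a != b]

    group_a, group_b = groups.values()
    return group_a, group_b, len(edges)

# The square plus one diagonal from the example above (labels are my own).
square_plus_diagonal = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# A single run may or may not find a min cut, so repeat and keep the best.
best = min(
    (contraction_algorithm(4, square_plus_diagonal) for _ in range(100)),
    key=lambda result: result[2],
)
print(best)   # e.g. ({0, 1, 2}, {3}, 2): a cut with two crossing edges
```

Repeating many independent runs and keeping the best cut found is the standard way to boost the modest success probability of a single run, which is exactly the quantity the conditional-probability analysis mentioned above pins down.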