In the previous blog, I mentioned I would write a separate blog on the Greedy Algorithm, and here it is!

Let's get started!

*What is a Greedy Algorithm?*

Imagine a pile of both good bricks and broken pieces. Now, you are told to build a wall. What will you do? What are the steps? First, you will pick the best brick and place it in your wall. Then, you will look for the next best brick. So, you are picking the best brick available at that moment and continuing to build your wall. The Greedy Algorithm works just like this. It picks the best solution for the moment and then looks for the next best option.

The properties of a Greedy algorithm are:

- It is an algorithmic paradigm that builds the solution piece by piece.

- In each step, it chooses the piece that offers the most obvious and immediate benefit.

- It fits perfectly for those problems in which locally optimal solutions lead to the global solution.

Now, what are local and global solutions? Well, when you are adding bricks to your wall one by one, each placement is a local solution. The finished wall is the global solution. Or you can say local solutions are the puzzle pieces of the global solution.

*Greedy Algorithms (Insertion Sort, Selection Sort, Prim, Kruskal, Topological Sort)*

Let's have a look at why these are called greedy algorithms.

### Insertion Sort

The first thing we do is divide the array into two parts: one sorted and one unsorted. From the unsorted part, we take the first element and place it in its correct position in the sorted part. After a few iterations, the left side is fully sorted. Now, if the first element of the unsorted part is 6, it will go between 5 and 7.

If we closely observe the steps, we can see that they follow the principle of the greedy algorithm, that is: finding the best solution at that moment.
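The steps above can be sketched in Python (the array values here are illustrative, not taken from the post's figure):

```python
def insertion_sort(arr):
    # Grow the sorted prefix one element at a time.
    for i in range(1, len(arr)):
        key = arr[i]  # first element of the unsorted part
        j = i - 1
        # Shift larger sorted elements right to open the correct slot.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # greedy step: best position right now
    return arr

print(insertion_sort([5, 7, 6, 3, 9]))  # [3, 5, 6, 7, 9]
```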

### Selection Sort

Here, we also have two parts: sorted and unsorted. From the unsorted part, we find the minimum value and place it in the sorted array.

As you can see, in each step, we are finding the local optimum solution for the sorted array. After finding all the local optimum solutions, it leads us to the global solution. So, this approach is also a greedy approach.
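A minimal sketch of this idea (again with illustrative values):

```python
def selection_sort(arr):
    # Grow the sorted prefix from the left.
    for i in range(len(arr)):
        # Greedy step: find the minimum of the unsorted part.
        min_idx = i
        for j in range(i + 1, len(arr)):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Place it at the end of the sorted part.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]
```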

### Topological Sort

Let's start from C. E depends on C, so C is not the best solution. Moving to E, H depends on E, so E isn't the best solution either. Next, H has no dependencies, so H is our best solution. We take H and push it on the stack (local solution). Then we add H's parent, E, and continue. Popping these elements from the stack, the answer will be ACEH.

So here, in each step, we check if the element leads to the local solution. If it does, we take it; otherwise, we ignore it.
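A minimal sketch of this stack-based approach. The graph here is a hypothetical dependency chain chosen to match the example's answer (C depends on A, E on C, H on E); the post's actual graph may differ:

```python
def topological_sort(graph):
    # graph: {node: [nodes that depend on it]}
    visited, stack = set(), []

    def dfs(node):
        visited.add(node)
        for dependent in graph.get(node, []):
            if dependent not in visited:
                dfs(dependent)
        # Push only once everything that depends on this node is handled.
        stack.append(node)

    for node in graph:
        if node not in visited:
            dfs(node)
    return stack[::-1]  # popping order

# Hypothetical graph: A -> C -> E -> H
graph = {"A": ["C"], "C": ["E"], "E": ["H"], "H": []}
print("".join(topological_sort(graph)))  # ACEH
```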

### Prim

It is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph in the following way:

- Take any vertex as the source, set its weight to 0 and all the other weights to infinity.

- For every adjacent vertex, if its current weight is more than the current edge's weight, set it to the current edge's weight.

- Then mark the current vertex as visited (we never visit a visited vertex again).

- Repeat these steps for all the other vertices in increasing order of weight.

By finding all the local best solutions, we find a global solution. So this is also a greedy algorithm.
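These steps can be sketched with a min-heap, which always hands us the cheapest reachable vertex next. The graph below is a made-up three-vertex example, not one from the post:

```python
import heapq

def prim_mst_cost(graph, source):
    # graph: {vertex: [(edge_weight, neighbour), ...]} for an undirected graph
    visited = set()
    total = 0
    heap = [(0, source)]  # source weight 0; everything else implicitly infinity
    while heap:
        weight, vertex = heapq.heappop(heap)  # greedy: cheapest vertex right now
        if vertex in visited:
            continue  # we don't visit a visited vertex
        visited.add(vertex)
        total += weight
        for edge_weight, neighbour in graph[vertex]:
            if neighbour not in visited:
                heapq.heappush(heap, (edge_weight, neighbour))
    return total

# Hypothetical graph: A-B costs 1, B-C costs 1, A-C costs 3
graph = {
    "A": [(1, "B"), (3, "C")],
    "B": [(1, "A"), (1, "C")],
    "C": [(3, "A"), (1, "B")],
}
print(prim_mst_cost(graph, "A"))  # 2
```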

### Kruskal

It is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph by following two rules:

- Add edges in increasing order of cost at each step.

- Avoid any edge that would create a cycle at each step.

In each step, we find a local optimum solution; by joining them, we find the global optimum solution. So this is also a greedy algorithm.
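The two rules can be sketched with a simple union-find to detect cycles. The edge list is a hypothetical triangle graph, not from the post:

```python
def kruskal_mst_cost(num_vertices, edges):
    # edges: list of (weight, u, v); vertices numbered 0..num_vertices-1
    parent = list(range(num_vertices))

    def find(x):  # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for weight, u, v in sorted(edges):  # rule 1: increasing cost
        root_u, root_v = find(u), find(v)
        if root_u != root_v:  # rule 2: skip edges that would form a cycle
            parent[root_u] = root_v
            total += weight
    return total

# Hypothetical triangle: 0-1 costs 1, 1-2 costs 2, 0-2 costs 3
edges = [(3, 0, 2), (1, 0, 1), (2, 1, 2)]
print(kruskal_mst_cost(3, edges))  # 3
```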

*Activity Selection Problem*

Given N activities with their start and end times, we need to choose the maximum number of activities that one person can do, assuming that a person can only work on one activity at a time.

For activity 1 (A1), which runs from 0 to 6, the person is occupied the whole time. They only become free after 6, so they can't, for example, perform an activity between 3 and 4. After finishing A1, they can move to the next activity. They can only do A1 at first; they can't do A2 because, at A2's start time, they are still busy with A1. The same goes for A3, A4, and A5. After finishing A1, they can only do A6. So, we can only do 2 activities here.

**Can we improve this situation?** Yes, we can.

A person cannot start another activity until they are free. They only become free after finishing the current activity. The key factor here is the finish time. One way to improve this situation is by sorting the activities based on their finish times. By sorting them this way, we can complete 4 activities instead of just 2.

This is a greedy algorithm because, at each step, we choose the best local solution. The idea behind a greedy algorithm is to find local optimal solutions and combine them to get the global optimal solution.

*Activity Selection Problem in Python*

Code:

```
activities = [["A1", 0, 6],
              ["A2", 3, 4],
              ["A3", 1, 2],
              ["A4", 5, 8],
              ["A5", 5, 7],
              ["A6", 8, 9]]

def printMaxActivities(activities):
    activities.sort(key=lambda x: x[2])  # sort by finish time - O(NlogN)
    i = 0
    firstA = activities[i][0]  # the first activity is always selected - O(1)
    print(firstA)
    for j in range(len(activities)):  # for loop - O(N)
        # select an activity if it starts after the last selected one finishes
        if activities[j][1] >= activities[i][2]:  # if condition - O(1)
            print(activities[j][0])
            i = j

printMaxActivities(activities)
```

Time complexity: O(NLogN), Space complexity: O(1)

*Coin Change Problem*

You are given coins of different denominations and a total amount of money. Find the minimum number of coins that you need to make up the given amount.

Given: Infinite supply of denominations: {1,2,5,10,20,50,100,1000}

|           | Total amount | Answer                                     |
| --------- | ------------ | ------------------------------------------ |
| Example 1 | 70           | 2 coins → 50 + 20 = 70                     |
| Example 2 | 122          | 3 coins → 100 + 20 + 2 = 122               |
| Example 3 | 2035         | 5 coins → 1000 + 1000 + 20 + 10 + 5 = 2035 |

Logic: sort the coins, then repeatedly take the largest coin that does not exceed the remaining amount, subtracting its value until the remaining amount reaches 0.

*Coin Change Problem in Python*

```
def coinChange(totalNumber, coins):
    N = totalNumber
    coins.sort()
    index = len(coins) - 1
    while True:  # while loop - O(N)
        coinValue = coins[index]
        if N >= coinValue:
            print(coinValue)  # O(1)
            N = N - coinValue
        if N == 0:  # O(1)
            break
        if N < coinValue:  # O(1)
            index -= 1

coins = [1, 2, 5, 20, 50, 100]
coinChange(201, coins)
```

Time complexity: O(N), Space complexity: O(1)

*Fractional Knapsack Problem*

Given a set of items, each with a weight and a value, determine how many of each item to include so that the total weight is within a given limit and the total value is as high as possible. Items can be divided into smaller parts.

Consider the knapsack as a box. This box has a weight limit of W, here 30 kg. You cannot fill it up with 31 kg, and the 30 kg you do pack should carry the maximum value.

If we take the red and blue boxes, the value in our knapsack will be 220. **Can we improve this? Yes, we can!**

Using a greedy algorithm, we can increase the value to 240. We take the green box (value 60), the blue box (value 100), and 2/3 of the red box.

How did we do this? First, we need to find the most valuable item. To do this, we calculate the density of each box by dividing its value by its weight. This gives us the value per kilogram. For the green box, the density is 6; for the blue box, it's 5; and for the red box, it's 4. Now, we order the boxes by density in descending order.

*Fractional Knapsack Problem in Python*

```
class Item:
    def __init__(self, weight, value):
        self.weight = weight
        self.value = value
        self.ratio = value / weight

def knapsackMethod(items, capacity):
    items.sort(key=lambda x: x.ratio, reverse=True)  # sorting list of objects - O(NlogN)
    usedCapacity = 0
    totalValue = 0
    for i in items:
        if usedCapacity + i.weight <= capacity:
            # take the whole item
            usedCapacity += i.weight
            totalValue += i.value
        else:
            # take only the fraction that still fits
            unusedWeight = capacity - usedCapacity
            value = i.ratio * unusedWeight
            usedCapacity += unusedWeight
            totalValue += value
        if usedCapacity == capacity:
            break
    print("Total value obtained: " + str(totalValue))

item1 = Item(20, 100)
item2 = Item(30, 120)
item3 = Item(10, 60)
cList = [item1, item2, item3]
knapsackMethod(cList, 50)
```

Time complexity: O(NLogN), Space complexity: O(1)