Top Algorithms and Data Structures

If you want to be a software engineer but aren’t sure where to begin, here’s a hint: algorithms and data structures are the place to start. Once you get the hang of them, you’ll start seeing these programming foundations everywhere. And the more algorithms and data structures you master, the more they’ll help you advance in your software engineering career.

Algorithms and Data Structures

Let’s look at Search and Sort, two families of algorithms you can’t live without, to get you started. Then we’ll delve into the rest of the landscape, which includes trees, graphs, dynamic programming, and much more.

Search algorithms

In general, there are two types of search algorithms that you should be familiar with right away: linear and binary. Depth-First Search (DFS) and Breadth-First Search (BFS) are also essential, but we’ll save them for the graph traversal section later. For now, note that they power applications such as the following:

  • Search engines use them to crawl the web.
  • In artificial intelligence, they are used to build bots, such as a chess bot.
  • Many other applications exist, such as finding the shortest path between two cities on a map.

Linear Search

The linear and binary algorithms are named after how long a search takes (its time complexity) relative to the size of the input being searched. If you have 100 items to search through, for example, the worst case with linear search is that you have to look at every item in the input before you find the value you want.

It’s called linear because the time it takes to search grows in proportion to the number of items being searched. In a nutshell, linear = straightforward (there is nothing clever about the algorithm).

Consider the following scenario: you’re searching for your friend Tom in a long line of people standing in no particular order. You already know what Tom looks like, so all you have to do is look at each person in turn until you find him. That’s all there is to it. In doing so, you’re following the linear search algorithm.
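
That idea translates almost directly into code. Here’s a minimal sketch in Python (the list of names is just made-up example data):

    # Linear search: look at each item in turn until we find the target.
    def linear_search(items, target):
        for index, item in enumerate(items):
            if item == target:
                return index      # found it: return its position
        return -1                 # scanned everything, target not present

    line = ["Ann", "Raj", "Tom", "Mia"]
    print(linear_search(line, "Tom"))   # 2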

The following are a couple more places where search algorithms are applied directly:

  • Searching for a song name in a sorted list of songs uses binary search (together with string matching) to return results quickly.
  • Git’s bisect command uses it to track down the commit that introduced a bug.

Binary Search

The term binary search comes from the word binary, which means “of or relating to two things.” The algorithm works by splitting the input into two sections and working out which section contains the object being searched for.

One half has the search object, while the other does not.

The method keeps halving the input this way until the item being searched for sits exactly at the point where the input is split. Binary search is essentially a highly disciplined process of elimination. While it is faster than linear search, it only works on ordered sequences.

Illustration:

Assume you’re looking for your friend Tom (5’5″) in a single-file line of people who have been ordered by height from shortest to tallest, left to right. It’s a long line, and you don’t have time to go through it one by one, comparing everyone’s height against Tom’s.

What options do you have?

This is where binary search comes in. You pick the person in the very middle of the line and measure their height. Let’s say they’re 5’7″ tall. You know right away that this person and everyone to their right isn’t Tom. Having cut your problem in half, you focus on the remaining half of the line and pick the middle person again. They’re 5’4″ tall.

So you can rule out that person and everyone to their left, halving the problem once more. And so on. After just five or six of these splits, you find Tom in a fraction of the time the linear approach would have taken. In doing so, you used the binary search algorithm.
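
Here’s a minimal sketch of the same process in Python, assuming the heights are stored (in inches) in a list that is already sorted:

    # Binary search: repeatedly halve the sorted list until the target is found.
    def binary_search(sorted_values, target):
        low, high = 0, len(sorted_values) - 1
        while low <= high:
            mid = (low + high) // 2            # "person in the middle of the line"
            if sorted_values[mid] == target:
                return mid                     # found the target
            elif sorted_values[mid] < target:
                low = mid + 1                  # discard the left half
            else:
                high = mid - 1                 # discard the right half
        return -1                              # target isn't in the list

    heights = [60, 62, 64, 65, 67, 70, 72]     # sorted, in inches
    print(binary_search(heights, 65))          # 3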

Sorting

One of the most common tasks you’ll perform as a developer is arranging, or ordering, lists of items. Sorting is one of the most studied problems in computer science. The idea is to place a list’s items in a particular order. Even though every major programming language has built-in sorting libraries, it’s useful to know how they work.

You may end up using any of them, depending on your needs. More importantly, you should know when and where each one is appropriate. The following are some examples of where sorting methods are used directly:

  • In e-commerce websites, you can sort by price, popularity, and other factors.
  • Searching through information to locate what you are looking for efficiently.
  • When dealing with numerical computations.
  • In string processing algorithms.

We’ll look at MergeSort and QuickSort, two of the most useful sorting algorithms. Others include Bucket Sort, Heap Sort, and Counting Sort.

MergeSort

Let’s say you need to build an ordered line of people out of an unordered crowd, rather than starting from the already-ordered line in the example above. You don’t have much time, so you devise a plan to speed things up.

To begin, you split the crowd into two groups. Then each of those two groups is split into two more groups, and so on, until you’re dealing with individuals. Then you start pairing people up, with the taller of the two in each pair standing to the right of the other.

Everyone is now sorted into these left-right ordered pairs.

After that, you merge the ordered pairs into ordered groups of four, then the ordered groups of four into ordered groups of eight, and so on. Finally, you find that you have a complete, height-ordered line of people, just like the one you came across earlier. Without even realizing it, you used the MergeSort algorithm.
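
Here’s a minimal MergeSort sketch in Python that mirrors that split-then-merge idea:

    # MergeSort: split the list in half, sort each half, then merge the halves.
    def merge_sort(values):
        if len(values) <= 1:                   # a single item is already sorted
            return values
        mid = len(values) // 2
        left = merge_sort(values[:mid])
        right = merge_sort(values[mid:])
        return merge(left, right)

    def merge(left, right):
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:            # take the smaller front item
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])                # one of these is already empty
        merged.extend(right[j:])
        return merged

    print(merge_sort([4, 2, 13, 6, 15, 12, 7, 9]))   # [2, 4, 6, 7, 9, 12, 13, 15]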

QuickSort

QuickSort is a little harder to explain with an everyday analogy, so let’s get a bit closer to the code. To begin, imagine you have an unordered list, or array, of eight numbers that you want to sort:

    4   2   13   6   15   12   7   9

You might use MergeSort, but you’ve heard that QuickSort is quicker, so you give it a shot. The first step is to pick a pivot number from the list.

Choosing the right pivot number will make a huge difference in how quickly your QuickSort runs, and there are ready-made formulas for doing so. But for the time being, let’s keep it straightforward and use the last number in the list as our pivot number: 9

We’ll place a separator at the beginning of the array and represent it with a pound sign (#) to make the next steps easier to follow.

    #   4   2   13   6   15   12   7   9

As we move from left to right across the array, we place any number less than the pivot (9) to the separator’s left, and any number greater than (or equal to) the pivot to its right.

We’ll begin with the first number in our array: 4

Since 4 is less than the pivot, we move the separator one position to the right, so that 4 ends up on its left:

    4   #   2   13   6   15   12   7   9

We repeat the process with the following number: 2

    4   2   #   13   6   15   12   7   9

But then we get to 13, which is greater than the pivot number 9 and is already on the separator’s right side. As a result, we leave it alone.

The next number is 6, which must be placed to the left of the separator. To get it into line, we first swap it with 13:

    4   2   #   13   6   15   12   7   9

to

    4   2   #   6   13   15   12   7   9

The separator is then pushed past it:

    4   2   6   #   13   15   12   7   9

The next number is 15, which is already to the separator’s right, so we leave it alone. The same goes for 12. But 7, the last number before the pivot, needs the same help that 6 did to get past.

To get it into position, we swap 7 with 13:

    4   2   6   #   7   15   12   13   9

The separator is then pushed past it once more:

    4   2   6   7   #   15   12   13   9

Finally, we arrive at our pivot point: 9

We swap 15 for 9 using the same logic as before to get the pivot number where it needs to be:

    4   2   6   7   #   9   12   13   15

We’ve completed our first QuickSort pass: all the numbers to the left of 9 are now smaller than 9, and all the numbers to its right are greater than 9. Next, the numbers on either side of the pivot are each treated as a new, smaller array to which QuickSort is applied again. We won’t go through every step, but the next rounds leave us with pieces small enough to finish off easily. The result is the following ordered list, typically produced in less time than MergeSort would have taken:

    2   4   6   7   9   12   13   15
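
Here’s a minimal Python sketch of the same procedure, using the last element as the pivot and a separator index, as in the walkthrough above:

    # QuickSort with a last-element pivot and a "separator" (partition) index.
    def quick_sort(values, low=0, high=None):
        if high is None:
            high = len(values) - 1
        if low < high:
            split = partition(values, low, high)   # pivot's final position
            quick_sort(values, low, split - 1)     # sort everything left of the pivot
            quick_sort(values, split + 1, high)    # sort everything right of the pivot
        return values

    def partition(values, low, high):
        pivot = values[high]
        separator = low                            # everything left of this is < pivot
        for i in range(low, high):
            if values[i] < pivot:
                values[i], values[separator] = values[separator], values[i]
                separator += 1
        # finally, swap the pivot into place at the separator
        values[high], values[separator] = values[separator], values[high]
        return separator

    print(quick_sort([4, 2, 13, 6, 15, 12, 7, 9]))   # [2, 4, 6, 7, 9, 12, 13, 15]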

Checklist for Sorting algorithms

The most popular sorting algorithms are listed below, along with some suggestions for when to use them.

Get these into your head because they’re all over the place!

Merge Sort

This is about your only choice when you need a stable, O(N log N) sort. Its only drawbacks are that it needs O(N) auxiliary space and has a slightly larger constant factor than quicksort.

There are a few in-place merge sorts, but they’re all either unstable or worse than O(N log N). Even the in-place O(N log N) sorts have such a large constant factor compared with plain merge sort that they’re more theoretical curiosities than practical algorithms.

Introsort

Introsort is a quicksort that switches to heapsort after a certain recursion depth to avoid quicksort’s O(N²) worst case.

Since you get the average case of quicksort with a guaranteed O(N log N) upper bound, it’s almost always better than a plain quicksort.

The only reason to use heapsort instead of introsort is in severely memory-constrained systems, where O(log N) of stack space is practically significant.

Insertion sort is used when N is guaranteed to be small, such as in the base case of a quicksort or merge sort. Although it is O(N²), its constant factor is very small and the sort is stable.

When you need something quick and dirty and can’t use the standard library’s sorting algorithm, use bubble sort or selection sort. The only advantage these have over insertion sort is that they’re slightly easier to implement.

When you don’t need a stable sort and average-case performance matters more than worst-case performance, use QuickSort. A quicksort is O(N log N) on average and O(N²) in the worst case. A good implementation uses O(log N) auxiliary storage in the form of stack space for recursion.

HeapSort

Use heapsort when you don’t need a stable sort and care more about worst-case than average-case performance. It’s guaranteed to be O(N log N) and uses O(1) auxiliary space, so you won’t unexpectedly run out of heap or stack space even on very large inputs. Under the right conditions, it’s even possible to break the O(N log N) barrier and sort in O(N)!

Here are a few examples of when it’s worth a shot:


  • When sorting integers with a restricted range, use counting sort (see the sketch after this list).
  • When log(N) is significantly greater than K, where K is the number of radix digits, use radix sort.
  • When you can guarantee that your input is roughly evenly distributed, use bucket sort.
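
As an example of the first case, here’s a minimal counting sort sketch in Python; it assumes the inputs are non-negative integers with a small, known maximum value:

    # Counting sort: tally how often each value appears, then replay the tallies in order.
    def counting_sort(values, max_value):
        counts = [0] * (max_value + 1)
        for v in values:
            counts[v] += 1                      # count occurrences of each value
        result = []
        for value, count in enumerate(counts):
            result.extend([value] * count)      # emit each value as often as it appeared
        return result

    print(counting_sort([4, 2, 2, 8, 3, 3, 1], max_value=8))   # [1, 2, 2, 3, 3, 4, 8]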

Must-Have Data Structures

Search and Sort are two of the most well-known and well-trodden paths into algorithms and data structures. However, no survey of the landscape would be complete without the data structures themselves, which include arrays, linked lists, stacks, queues, heaps, graphs, and tries.

Trees

Trees are one of the most popular data structures, so you’ll run into plenty of them. They’re also one of the simplest data structures to picture and understand.

Almost all of the terminology used for trees is borrowed from family trees. A node is a parent, a child, a sibling, an ancestor, or a descendant, depending on its place in the tree.

Storing objects in a tree lets us locate items more easily, provided the tree has certain properties. In a binary search tree, for example, every key in a node’s left subtree is smaller than the node’s own key, and every key in its right subtree is larger.
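
Here’s a minimal binary search tree sketch in Python that illustrates that property (the keys are arbitrary example values):

    # Binary search tree: smaller keys go left, larger (or equal) keys go right.
    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def insert(root, key):
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        else:
            root.right = insert(root.right, key)
        return root

    def contains(root, key):
        while root is not None:
            if key == root.key:
                return True
            root = root.left if key < root.key else root.right
        return False

    root = None
    for key in [8, 3, 10, 1, 6]:
        root = insert(root, key)
    print(contains(root, 6), contains(root, 7))   # True False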

Graphs

A tree is essentially a special case of a graph, if you want to get technical. It’s no wonder, then, that graphs are ubiquitous in everyday life and in computer science. Graph traversal, in particular, is essential, and there are two algorithms you should learn right away: Breadth-First Search (BFS) and Depth-First Search (DFS).

Imagine you’re at the top of a pyramid-shaped skyscraper, looking for your friend Tom. You realize the building is like a graph, with each room serving as a node.

Breadth-First Search (BFS)

You can conduct a rough breadth-first search for your friend Tom by traversing the pyramid floor by floor, beginning at the top and working your way down.

Depth-First Search (DFS)

Rather than going floor by floor, you can do a depth-first search for your friend Tom by going elevator by elevator, searching the nearest rooms on each stop as you traverse the pyramid in vertical slices.
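
Here’s a minimal Python sketch of both traversals on a graph stored as an adjacency list; the room names are purely illustrative:

    # BFS explores the nearest rooms first (floor by floor);
    # DFS follows one path as far as it goes before backtracking.
    from collections import deque

    building = {
        "lobby": ["A", "B"],
        "A": ["C", "D"],
        "B": ["E"],
        "C": [], "D": [], "E": [],
    }

    def bfs(graph, start):
        visited, queue, order = {start}, deque([start]), []
        while queue:
            room = queue.popleft()              # nearest unexplored room first
            order.append(room)
            for neighbour in graph[room]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(neighbour)
        return order

    def dfs(graph, room, visited=None):
        if visited is None:
            visited = []
        visited.append(room)
        for neighbour in graph[room]:
            if neighbour not in visited:        # dive deeper before backtracking
                dfs(graph, neighbour, visited)
        return visited

    print(bfs(building, "lobby"))   # ['lobby', 'A', 'B', 'C', 'D', 'E']
    print(dfs(building, "lobby"))   # ['lobby', 'A', 'C', 'D', 'B', 'E']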

Dynamic Programming (DP)

Dynamic programming (DP) can come to your rescue when you’re faced with a massive, heavyweight problem that you can’t imagine solving in one go. DP helps you break a large problem down into smaller subproblems.

As DP solves each subproblem, it saves the result, then combines all of the saved results to tackle the main problem.
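
A classic small example is computing Fibonacci numbers with memoization; here’s a minimal sketch in Python:

    # Dynamic programming via memoization: solve each subproblem once, save the
    # result, and reuse it instead of recomputing.
    def fib(n, memo=None):
        if memo is None:
            memo = {}
        if n <= 1:
            return n
        if n not in memo:
            memo[n] = fib(n - 1, memo) + fib(n - 2, memo)   # combine saved results
        return memo[n]

    print(fib(50))   # 12586269025, computed almost instantly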

Hashing

The way we look up data has changed dramatically in recent years. Where binary search was once the most common method, hash lookups are now increasingly popular.

Although we’ll skip the time-complexity comparison for now, suffice it to say that hashing is often much faster when it comes to searching through lists of millions of items.
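
In Python, the built-in dict is a hash table, so a minimal sketch of hash lookup is just a dictionary (the song data here is made up):

    # Hash lookup: the key is hashed to find its slot, giving average-case O(1) access.
    songs = {
        "Bohemian Rhapsody": "Queen",
        "Imagine": "John Lennon",
        "Hey Jude": "The Beatles",
    }

    print(songs.get("Imagine"))      # John Lennon
    print(songs.get("Yesterday"))    # None (not stored)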

Matching string patterns

Regular expressions (commonly shortened to regex) and string pattern matching are terms that are sometimes used interchangeably. The idea is that you’re looking for a pattern that several items can match, rather than for a single exact object.

Let’s say you’re looking for all sentences that end in a question mark in a book: that’s a task for string pattern matching.
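
With a regular expression, that task is a one-liner. Here’s a minimal sketch in Python using the standard re module (the text is just example data):

    # Find every stretch of text that ends in a question mark.
    import re

    text = "Where is Tom? He left early. Did he take the elevator? Yes."
    questions = re.findall(r"[^.?!]*\?", text)
    print([q.strip() for q in questions])
    # ['Where is Tom?', 'Did he take the elevator?']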

There’s more…

We’ve covered a lot of terrain so far, but we’ve only scratched the surface of the vast field of algorithms and data structures.

Conclusion

All of the above should hammer home the point that algorithms and data structures will pave (and maybe even pay) the way for you if you want to be a software engineer. For a solid base, you might want to start with search (linear and binary) and sorting.

Once you’ve grasped the basics of these programming foundations, it’s time to brush up on trees, graph traversal (BFS and DFS), dynamic programming, and string pattern matching.

However, the main aim is to begin living and breathing algorithms and data structures: devising step-by-step solutions to real-world problems and visualizing complex scenarios using simple data structures.

If you can master that, coding will become second nature.
