I’ve Cracked the Code, Please Call Tech Support- Episode 13, Drop-Shadows of the Empire
Another pretty good week, all things considered. Moving along, feeling more comfortable with Object Oriented Programming, and settling in for the winter season. It’s going to be a very weird holiday this year, but I think we can get through it.
1. Tell us about something you learned this week.
This week we’ve been continuing to focus on OOP (Object Oriented Programming), and I’m finally starting to feel like I can build my own things with it. I’m learning how to call methods from different parts of my code, and how to keep values attached to individual class instances. It’s all very exciting, and a fun, different approach.
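Here’s a little sketch of what I mean (the `Item` class here is just something I made up to illustrate, not from any assignment):

```javascript
// A simple class: each instance keeps its own values,
// and its methods can be called from anywhere with a reference to it.
class Item {
  constructor(name, price) {
    this.name = name;   // values live on the instance
    this.price = price;
  }

  describe() {
    return `${this.name} costs $${this.price}`;
  }
}

const coffee = new Item("coffee", 3);
console.log(coffee.describe()); // "coffee costs $3"
```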
2. What are the pros and cons of immutability?
The pros of using this structure are that it can lead to simpler overall programming. Since the data is immutable, the original is always there to reference, leaving you a trail to follow if something goes sideways in your code. You can also compare new copies of your data to the original, see if anything changed, and make decisions based on those changes.
The cons are, of course, that you can’t change the original data. That’s not a huge deal in most cases, but it does mean you have to navigate certain scenarios more carefully.
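A quick sketch of that “compare the copy to the original” idea (the object here is made up for illustration):

```javascript
// Treat data as immutable: make copies instead of editing in place.
const original = { name: "Leia", ships: 2 };

// Spread syntax creates a brand new object; the original is untouched.
const updated = { ...original, ships: 3 };

// Because the original still exists, we can compare and react to changes.
console.log(original.ships);                   // 2
console.log(updated.ships);                    // 3
console.log(original.ships !== updated.ships); // true, so something changed
```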
3. How can you achieve immutability in your own code?
Using const to create a variable prevents it from being reassigned, which makes primitive values effectively immutable. For objects and arrays, though, const only locks the variable binding; you’d need something like Object.freeze to lock the contents as well. On the other side of the coin, let allows reassignment. Copying a const into a let gives you a working copy to change, while the original immutability is preserved.
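Here’s roughly how that looks in practice (the `config`/`draft` names are just for illustration):

```javascript
// const prevents reassignment, and Object.freeze prevents mutation
// of the object's contents.
const config = Object.freeze({ theme: "dark" });

// config = {};            // TypeError: assignment to constant variable
// config.theme = "light"; // ignored (throws in strict mode)

// Copying into a let-style working copy leaves the original intact.
let draft = { ...config };
draft.theme = "light";

console.log(config.theme); // "dark"
console.log(draft.theme);  // "light"
```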
4. What are Divide and Conquer algorithms? Describe how they work. Can you give any common examples of the types of problems where this approach may be used?
A Divide and Conquer algorithm takes a large problem, and breaks it down into a series of smaller, more manageable problems. As per the Khan Academy, these algorithms have three parts:
i. Divide the problem into a number of subproblems that are smaller instances of the same problem
ii. Conquer the subproblems by solving them recursively. If they are small enough, solve the subproblems as base cases.
iii. Combine the solutions to the subproblems into the solution for the original problem.
A common example of this is Merge Sort. If you have a large array of items (let’s say numbers) and you want to sort them from lowest to highest, you can do this by Dividing, and Conquering.
Split the array into two halves, recursively sort each half, and merge the sorted halves back together.
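A sketch of merge sort following those divide/conquer/combine steps (one common way to write it, not the only way):

```javascript
function mergeSort(arr) {
  if (arr.length <= 1) return arr; // base case: one item is already sorted

  // Divide: split the array in half.
  const mid = Math.floor(arr.length / 2);
  // Conquer: recursively sort each half.
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));

  // Combine: merge the two sorted halves back together.
  const merged = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}

console.log(mergeSort([5, 2, 8, 1, 9])); // [1, 2, 5, 8, 9]
```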
5. How do insertion sort, heap sort, quick sort, and merge sort work?
Well, we just covered merge sort above, so let’s talk about insertion sort, heap sort, and quick sort.
Insertion sort starts with one item in a sorted list on the left, and the remaining items on the right. You take one item from the right, move it into the left list, and insert it into its correct position. Continue until everything is sorted.
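In code, that “grow a sorted section on the left” idea could look like this:

```javascript
function insertionSort(arr) {
  const a = [...arr]; // work on a copy, keep the original intact
  for (let i = 1; i < a.length; i++) {
    const current = a[i];
    let j = i - 1;
    // Shift larger items right until we find current's spot.
    while (j >= 0 && a[j] > current) {
      a[j + 1] = a[j];
      j--;
    }
    a[j + 1] = current;
  }
  return a;
}

console.log(insertionSort([4, 1, 3, 2])); // [1, 2, 3, 4]
```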
Quick sort works by choosing an item in the list as your “pivot”. You then split the rest into “less than pivot” and “greater than pivot” sublists, with the pivot sitting between them. Repeat the process on each sublist until every sublist has at most one item, and the list is sorted.
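A readable, copy-based sketch of that pivot idea (real-world quicksorts usually partition in place to save memory, but this shows the concept):

```javascript
function quickSort(arr) {
  if (arr.length <= 1) return arr; // base case

  const pivot = arr[arr.length - 1]; // a simple pivot choice
  const rest = arr.slice(0, -1);
  const less = rest.filter(x => x < pivot);
  const greaterOrEqual = rest.filter(x => x >= pivot);

  // The pivot lands between the two sorted sublists.
  return [...quickSort(less), pivot, ...quickSort(greaterOrEqual)];
}

console.log(quickSort([7, 2, 9, 4, 2])); // [2, 2, 4, 7, 9]
```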
With Heapsort, you add all of your items into a heap, then pull out the largest item and put it at the end. Repeat until sorted.
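Heapsort in code is a bit longer; here’s one standard way to sketch it, building a max-heap and then repeatedly moving the largest item to the end:

```javascript
function heapSort(arr) {
  const a = [...arr];

  // Push the item at `root` down until the heap property holds.
  function siftDown(start, end) {
    let root = start;
    while (2 * root + 1 <= end) {
      let child = 2 * root + 1;
      if (child + 1 <= end && a[child + 1] > a[child]) child++; // larger child
      if (a[root] >= a[child]) return;
      [a[root], a[child]] = [a[child], a[root]];
      root = child;
    }
  }

  // Build the max-heap: the largest item rises to index 0.
  for (let i = Math.floor(a.length / 2) - 1; i >= 0; i--) {
    siftDown(i, a.length - 1);
  }

  // Pull out the largest item, put it at the end, and repeat.
  for (let end = a.length - 1; end > 0; end--) {
    [a[0], a[end]] = [a[end], a[0]];
    siftDown(0, end - 1);
  }
  return a;
}

console.log(heapSort([3, 9, 1, 7])); // [1, 3, 7, 9]
```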
6. What are the key advantages of insertion sort, quick sort, heap sort, and merge sort? Discuss best, average, and worst case time and memory complexity.
Insertion sort’s main advantage is simplicity: it’s easy to write and actually fast on small or nearly-sorted lists, where its best case is O(n). Because items are inserted one at a time, though, its average and worst cases are O(n²), which makes it slow on large lists. It sorts in place, so it only needs O(1) extra memory.
Quicksort is much faster on average, and is popular because of it. Using its pivot system, data gets broken down into groups and sorted in O(n log n) time in the best and average cases, with roughly O(log n) memory for the recursion. But if you happen to choose bad pivots, it degrades to an O(n²) worst case.
Heapsort is a pretty quick option, actually. By keeping all of the items in a “heap”, you can pull out the largest (which is easy to identify at the top), put it at the end of your list, and keep going. Its best, average, and worst cases are all O(n log n), and since everything stays in one heap, it sorts in place with O(1) extra memory.
As mentioned, mergesort is a Divide and Conquer algorithm. Dividing the array in half, and then again, and then again recursively, and merging the halves back together gives it a consistent O(n log n) time in the best, average, and worst cases. It’s a bunch of little jobs, not one huge, slow one. The trade-off is that merging requires O(n) extra memory for the copies.
7. Explain the difference between mutable and immutable objects.
So, like we spoke about in the beginning, immutable objects cannot be changed, only copied and referenced. Mutable objects can be changed in place, like an array having items added to it.
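A small sketch of the contrast, using an array (mutable) and a string (immutable in JavaScript):

```javascript
// Mutable: an array can be changed in place.
const items = [1, 2];
items.push(3);      // allowed even with const: the binding is constant,
console.log(items); // [1, 2, 3]  but the array itself is mutable

// Immutable: a string can't be changed, only replaced with a new one.
let word = "hope";
console.log(word.toUpperCase()); // "HOPE" (a brand new string)
console.log(word);               // "hope" (the original is unchanged)
```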
8. What are the three laws of algorithm recursion? Describe them in your own words and what they mean to you.
The three laws are as follows:
i. A recursive algorithm must have a base case.
ii. A recursive algorithm must change its state and move toward the base case.
iii. A recursive algorithm must call itself, recursively.
The first law means we have a condition we want to resolve, a problem to solve. That is our end goal.
The second law means that we have to be moving towards that goal with each iteration. Otherwise, there would be no point.
The third law means that until we have solved our problem, we go again. And again. And again. Until we beat the condition.
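The classic factorial function shows all three laws at once:

```javascript
function factorial(n) {
  if (n <= 1) return 1;        // law 1: a base case to stop at
  return n * factorial(n - 1); // law 2: n shrinks toward the base case
                               // law 3: the function calls itself
}

console.log(factorial(5)); // 120
```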