Goodbye, Object Oriented Programming

https://medium.com/@cscalfani/goodbye-object-oriented-programming-a59cda4c0e53


Who needs OO anyway, it's all bullshit.

Object-oriented programming is like a condom.

Condoms are very useful in some cases. Languages like Java and C# use this as an excuse to make you wear a condom at all times, even when you're going to the store to buy some bread. Even when you're trying to have children - you can just poke a hole in it after all, so you shouldn't be complaining.

Yeah, I wait to see (when I code) until OO seems the right tool for the job -- I never start with it.

Sometimes it's useful -- most of the time it isn't.

OO can be pretty useful if handled correctly.

The problem is, it's not a catch-all miracle solution that people tried to pass it as back in the 90's and 00's.

That's what my professors try to push even now.

what should we use then?
structured programming?

Data oriented programming.

OO has problems but the biggest problem the author had was being a fucking retard.

He got all the reuse he wanted in the previous project. Why would he think it is smart to copy entire classes from a previous project, which likely had a different objective achieved with a different methodology?

His example with the ArrayCount class was only an example of his incompetent programming. His design pattern was bad from the start, he would have failed whether he used OO or not.

I'm not sure why he thinks this is a point. I understand it's cool to hate OO, but this isn't even a criticism. It's like stating water is wet. Of course a method that uses an object has a pointer to that object. He tries to go on about how we have to clone the object and so on. He's full of shit. He's talking about Java, but Java doesn't do that.

Interfaces are still OO. This is like saying

I wasn't surprised when I got to the end and saw

He's a stuck up hipster who wrote a shitty blog complaining about how everything is somebody else's fault. OO is not perfect, but his complaints were all exaggerations or outright lies.

the problem is that if you catch yourself doing everything strictly according to OOP, you'll end up with even worse and more unreadable code than you would writing a big project purely functionally

out-of-the-box thinking is required if you want Java or C# quirks to work for you instead of against you. Some code is way easier for the computer and for you if you write it like a C brute instead of forcing a quirk like WPF Binding into your program at every step.

OOP patterns like composites are almost essential nowadays too, so completely doing away with OOP isn't an option either
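For reference, a minimal sketch of what I mean by a composite (C#, all the names are made up): leaves and groups share one interface, and a group just forwards to its children.

using System;
using System.Collections.Generic;

// Leaves and groups share one interface, so callers can treat a whole
// tree of nodes as if it were a single node.
interface ISceneNode
{
    void Render();
}

class Sprite : ISceneNode
{
    public string Name;
    public void Render() { Console.WriteLine("draw " + Name); }
}

class Group : ISceneNode
{
    private readonly List<ISceneNode> children = new List<ISceneNode>();
    public void Add(ISceneNode child) { children.Add(child); }
    public void Render()
    {
        foreach (var child in children) child.Render();   // recurse into the subtree
    }
}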

There's no magic bullet. There's no superior martial art. You use whatever makes sense at the time, and whatever fits you the best. Something that works for others might not for you and vice versa. Something that fits one project might spell disaster for another. That's one reason I don't like inflexible languages like Java. They're designed for the benefit of managers and businessmen who want to control the workforce, so everyone is easily replaceable. Those languages enslave you, and dull your mind.

I don't understand his critique of encapsulation... Private variables are a thing.
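The point of a private variable, in a five-line sketch (names made up):

class Counter
{
    private int count;                    // nobody outside can touch this directly

    public void Increment() { count++; }  // the only way in is through the methods
    public int Value { get { return count; } }
}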


Don't trap yourself into one paradigm. Personally, I use OO and FP.

Sometimes I even use both in the same project.

This is analogous to "Well you can't do that directly in OO but here are some patterns you should learn that will allow you to do something similar" bullshit.

Wow, what a renaissance man. What is it with hipsters making these lists of supposed professions when at best they have a DeviantArt account or Fanfiction.net account where they posted worthless shit? You see these lists all the time on Twitter; sometimes they even add mundane things like "mom"/"dad", or call themselves the CEO of their shitty 3-person startup that hasn't produced anything of value ever. Or they will call themselves "Software Architect" or "Software Engineer" for writing some shitty Javascript web-app by gluing together jQuery and Angular.

Learn every paradigm possible, and then always just use what is right for the job.

Instead of OO why don't we use attribute-oriented programming, where you categorize everything by attribute instead of assigning attributes to a thing.
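Not sure exactly what you mean, but here's one reading of it as a rough sketch (C#, everything here is invented): keep a table per attribute, keyed by entity id, instead of one object per thing carrying all its attributes.

using System;
using System.Collections.Generic;

// "Categorize by attribute": each attribute is its own table, keyed by entity id.
class AttributeWorld
{
    public Dictionary<int, double> Health = new Dictionary<int, double>();
    public Dictionary<int, string> Label = new Dictionary<int, string>();

    static void Main()
    {
        var world = new AttributeWorld();
        world.Health[1] = 100.0;          // thing 1 has a health attribute...
        world.Label[1] = "player";        // ...and a label
        world.Health[2] = 40.0;           // thing 2 only has health

        // "Everything that has health" is just the keys of one table.
        foreach (var id in world.Health.Keys)
            Console.WriteLine(id + ": " + world.Health[id]);
    }
}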

Pretty much guaranteed to be far slower to execute.

Shitty programmers write shitty code regardless of design pattern or language features.

funny that it's always the same problem.

As expected, not a single alternative to the three OOP pillars is given.

OOP shaming confirmed for harmful.cat-v.org meme.

What OOP does is provide shitty programmers increasingly more abstract ways of being shitty. Instead of having to git gud and learn how to write their own string parser, they just plug-and-play one of the 50,000 methods that their bloated IDE lists off without any consideration whatsoever for whether or not the pre-packaged library is going to hurt their performance -- because they have NO IDEA how it works or how it was written.

Most people are shit at programming. Almost all hipsters are shit at programming. Hipsters shit on OO to be rebellious, but because they are shit at everything they have shitty critiques. And without exception the hipster will say "dude use Haskell lmao its like functional no boss man is going to tell me how to work" when he gets blogging after a long day of debugging the coffee machine at his minimum wage starbucks "office". In general, people without ability and talent complain rather than do. That's why people who criticize OO all have the same problem.


Limitations exist in every paradigm.

"Modern" objected-oriented programming is degenerate. It was subverted from the original concept by countless know-nothing engineering niggers who thought they could design proper languages.

...

Anyone else find it interesting that OOP is an exact anagram of POO. Who are the real Pajeets?

And did anyone ever notice that you write object-oriented programs (OOP) in an object-oriented language (OOL)?

OOP in the OOL, pajeet!

yeah totally man, those damn white supremacists telling me I can't fuck dogs

What are you even talking about, retard?
You have to go back
>>>Holla Forums

...

He cherrypicked the few situations that graze the limits of OO and used those as an excuse to drop it entirely. What did he expect, that it would be pure magic? A perfect reflection of the OO theory? If it works in 90% of the cases then any normal developer would consider that a win. If someone drops it over those 10% you rarely encounter anyway, then they're being an idiot.

To keep your car analogy going, his whole argument is: hey, this all terrain vehicle can't drive through the woods! I once had to cross the woods. Fuck that shit, I'm walking everywhere from now on, just in case I come across a few trees and don't feel like steering around.

THAT'S NOT HOW IT FUCKING WORKS YOU FUCKING FAGGOTS

I don't understand Holla Forums's hate for Java's "objects only" approach to OOP. I mean, writing static classes is pretty much using a namespace, which you would end up doing in C++ unless you like living on the edge or planning to go Debian-tier "stable". Yeah, you may have to write a bit more, like the class boilerplate or the "namespace" before each call, but the first one is insignificant (especially if you automate it, which is piss easy to do even without an IDE) and the second one is not exclusive to Java.

There is only ONE situation in which Java's approach to objects is markedly clumsy and stupid, partially but not fully fixed in Java 8. You would know this if you had tried Java outside of a FizzBuzz.

...

...

...

Sure thing, buddy

Here is a simple example. Let's start with a shape and rectangle class:

public abstract class Shape
{
    public abstract int Area();
}

public class Rectangle : Shape
{
    public int width, height;

    public override int Area() { return width * height; }
}

Pretty simple, right? A rectangle is a type of shape, so it makes sense to define it like this. Now here is where it falls apart: a square is a type of rectangle, so it makes sense that it would be a sub-class of rectangle:

public class Square : Rectangle
{
    public int side;

    public override int Area() { return side * side; }
}

Oops, now we have two dead fields and there is no guarantee that the 'side' of the square is the same as the 'width' and 'height'. What do we do now? Use accessors instead of fields?

public class Square : Rectangle
{
    public int Side
    {
        get { return width; }
        set { width = value; height = value; }
    }

    public override int Area() { return Side * Side; }
}

This seems fine, until someone receives the square thinking it is a rectangle:

void MakeWide(Rectangle rect)
{
    rect.width = 2 * rect.height;
}

var rect = new Square { Side = 3 };
// many lines later, or in a different file
MakeWide(rect);

Now the square will produce the wrong area. We can work around it by making the width and height accessors as well and overriding those:

public class Square : Rectangle
{
    public int Side
    {
        get { return width; }
        set { width = value; height = value; }
    }

    public int Width  { get { return Side; } set { Side = value; } }
    public int Height { get { return Side; } set { Side = value; } }

    public override int Area() { return Side * Side; }
}

This will ensure that both fields get updated when only one of them is assigned. But now the user will find himself baffled that trying to make the "rectangle" wide will make it four times larger instead.
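Just to spell the surprise out, here is a self-contained sketch of the accessor version (I'm assuming Rectangle now exposes Width and Height as virtual properties rather than raw fields, so Square's overrides actually kick in; that part is my assumption, not code from the article):

using System;

public class Rectangle
{
    public virtual int Width { get; set; }
    public virtual int Height { get; set; }
    public virtual int Area() { return Width * Height; }
}

public class Square : Rectangle
{
    public int Side
    {
        get { return base.Width; }
        set { base.Width = value; base.Height = value; }
    }

    public override int Width  { get { return Side; } set { Side = value; } }
    public override int Height { get { return Side; } set { Side = value; } }
    public override int Area() { return Side * Side; }
}

public class Demo
{
    // The caller only knows it has a Rectangle and expects the area to double.
    static void MakeWide(Rectangle rect) { rect.Width = 2 * rect.Height; }

    public static void Main()
    {
        Rectangle rect = new Square { Side = 3 };
        Console.WriteLine(rect.Area());   // 9
        MakeWide(rect);
        Console.WriteLine(rect.Area());   // 36, four times bigger, not the 18 the caller expected
    }
}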

The whole "is-a" concept is a neat explanation, but it only works when the analogy leads from abstract to concrete. This is why you can have a hierarchy like animal -> mammal -> dog -> labrador: the classes get more and more concrete at every step, and information is only added. In shape -> rectangle -> square, on the other hand, information is lost and additional invariants are added.

OOP works perfectly for Shape-Rectangle-Square. There is no wasted information. Try harder next time.

No. Actually it doesn't. Always the same examples, straight from their favorite blogger.

First off, get that "OO must always be a perfect reflection of our understanding(!) of reality" theory out of your head and learn to use it the way it actually works, which is sometimes a little different than you'd initially expect. It doesn't always match 1:1 with what some perceive as what was promised to them (that it would be exactly like reality!).

Even if it doesn't work like you thought it should in certain cases, then that is still no good excuse to drop it entirely, which people seem to want to do. Similar reasoning would be: hey, I noticed some things off about quantum theory. Therefore: back to religion! That's just stupid.

As if it would all be so much better in functional programming. It would be worse. A garbled mess of different functions with lots of ifs and cases you need to wade through. No thank you.

But this is still just OO programming, not reality. OO tries to get closer to reality. Nobody said it would be exactly like reality, that's what some people just keep thinking for some reason and using as arguments. OO works great in most cases and if you encounter one of those cases where it doesn't, you just work around them and still have cleaner sources than you would have with functional programming.

Anyway, to get back to the example for a sec: First question: when are you ever going to need a Square object as a separate class from a rectangle anyway? What is the logic behind that? As an OO architect you need to ask that first. A square is only a special case rectangle, it doesn't require a whole other class. Even in reality it isn't a different kind of object. It's just a special case rectangle with a special name. A leap year is a special case year and it also doesn't need a special class. What's the point? What logical reason do you have to build it like that? You can just add "isSquare()" or "isLeap()" or something: done! That's how you do it.
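Something like this, roughly (my own sketch of the "special case, not a subclass" idea, not anybody's actual code):

public class Rectangle
{
    public int width, height;

    public int Area() { return width * height; }

    // A square is just a rectangle that happens to have equal sides.
    public bool IsSquare() { return width == height; }
}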

TL;DR: This example is just another one based on a bad understanding of object orientation in programming. Don't use OO if you don't like it or don't get it, but stop with the bullshit arguments hoping to prove it's bad when it's not.

it's what money as a motivation does. they make their shitty blog and then they get ad revenue from the blog so they say to themselves, "NOW I'M A PROFESSIONAL!"

Yep exactly, fuck those functional retards and their garbled code!

[code]
defmodule Geometry do
  def area({:rectangle, a, b}) do
    a * b
  end

  def area({:square, a}) do
    a * a
  end

  def area({:circle, r}) do
    r * r * 3.14
  end
end
[/code]


What about that is so much better than ?

haha I think that's perfect.
OOP seems so obsessed with hiding difference that it would prefer bizarre constructions to having different functions for different data. Even though that's exactly what you create with the complex structure.

It's not java.

The last stand of a shitter: criticize Java rather than OOP design itself.

ok, kid.

It's clearer, less verbose and more straightforward.

All of those are language implementation details. We're talking about this from a design standpoint. I can make a verbose and obfuscated set of functions in Haskell; that isn't a mark against functional programming, just against Haskell. On a side note, I'd argue defining square as rectangle(a,a) is more straightforward, but it's irrelevant. How is the functional approach to this problem a better design than the OOP approach?

Yes, and? Java is shit and you should avoid it.

But you don't have to.

In >>647317's case, it seems less concerned with taxonomy and more with functionality. It also doesn't waste a data member trying to fit the square into the rectangle model, though it doesn't preclude creating a conversion function if it suddenly became necessary to see the square as a rectangle.

There was no "trying". It fit simply and easily. If you didn't want to define it separately, you just create a rectangle with (a,a). It's no different than in the functional design in that regard.

That's the biggest joke of a statement I've ever heard.

There was. Categorizing the square as a rectangle may or may not be useful, and that depends entirely on how these entities are used in the rest of the program. Also, it did waste a data member.

Not an argument.

Then it's no more of a waste than >>647317's definition. Not a criticism of OO design.

Pretentious bullshit isn't an argument.

It's more of a criticism of how OO design inevitably encourages people to think:
>dude, a square "is a" rectangle! A circle "is an" ellipse! Therefore inheritance!
It would be great if OO programmers followed the Liskov substitution principle strictly, but then OO design would suddenly not look as intuitive.

That's the point I was trying to make. started the topic by saying basically that, then came in with "that's how everyone teaches it, so it must be right" and then I provided an example where shoving real-world "is-a" taxonomy into OOP does not work. I never said that OOP itself is shit.


You still lose the square invariant that two sides must have equal length. There is nothing in your code maintaining it and Square is really just another name for Rectangle.


This is more like it. What language is that? It reminds me of the Common Lisp Object System with its defgeneric and defmethod, but that's obviously not Lisp.

Elixir, a language I've been playing with for a while; it's only a year or so old. It's built on top of the Erlang VM and aims to eliminate a lot of Erlang's cruft and make it easier/more productive to write. Its syntax is somewhat Ruby-like but the similarity stops there. It's primarily functional and quite nice to work with.

Erlang was originally used for telecom systems, so it's highly concurrent and fault tolerant by design. Because mutexes and semaphores are shit.
youtube.com/watch?v=uKfKtXYLG78

No, you do not. If you create a square object it is defined as having two sides of equal length. It's just convenient to rely on Rectangle for code because Square has no need for unique code.
That has two equal sides. Yes, just like in real life.

OO is a niche tool for specific uses, no more, no less. It's wonderful for, say, vidya games, but for most other tasks it just makes data flow hard to visualize.

That's only true if the rectangle is immutable. Otherwise, methods that modify the member variables will need to be redefined in the square class to preserve the invariant, and even then, those methods in the context of a square might be questionable (e.g. a method called stretchWidth() that also changes the height). At that point, you might as well not use inheritance.

In general OO design encourages immutability. In the case of stretchWidth() I don't see a problem using the rectangle's version and returning a rectangle, since that's what it would become. If you also stretched the height to preserve the invariant, your code would be misleading or lying to anyone using it.
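Roughly what I have in mind (just a sketch, not anyone's real API; the names are mine):

public class Rectangle
{
    public readonly int Width;
    public readonly int Height;

    public Rectangle(int width, int height) { Width = width; Height = height; }

    public int Area() { return Width * Height; }

    // No mutation: "stretching" builds a new value instead of editing this one.
    public Rectangle StretchWidth(int factor) { return new Rectangle(Width * factor, Height); }
}

public class Square : Rectangle
{
    public Square(int side) : base(side, side) { }
    // StretchWidth is simply inherited and returns a plain Rectangle,
    // which is honest: the result is no longer a square.
}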

?
Are you confusing OO design with functional design? OO design encourages you to modify your objects, hence the use of "setter" methods. It's fine with me if you do object oriented-like stuff with immutable objects, but most people don't.

It's a method that modifies the member variables; it doesn't return anything.

In what world is that true?

I'd advise against that. It also conveniently bypasses the problem.


Maybe I should clarify. Students are taught to use immutability in general. There are cases where you don't want it, but if you can make something immutable, do so. OO design was a bad way to word it, but the philosophy of people using OO is generally that immutability is better. Specifically for the reasons you'd think of: mutability greatly increases the difficulty of creating bug-free software.

Really? I'm not too sure about that. I guess it depends on the school?