One simple way to interpret OOP is to look at the language features that are commonly called object-oriented. Pierce (2002) explains in Types and Programming Languages:
Perhaps the most basic characteristic of the object-oriented style is that, when an operation is invoked on an object, the object itself determines what code gets executed. Two objects responding to the same set of operations (i.e., with the same interface) may use entirely different representations […]. These […] are called the object's methods. Invoking an operation on an object [is] called method dispatch.
By contrast, a conventional abstract data type (ADT) consists of a set of values plus a single implementation of these operations […].
This particular kind of polymorphism, based on dynamic dispatch, forms the core of many object-oriented techniques. In particular, all of the (original) design patterns described in the Design Patterns book leverage dynamic dispatch to solve a multitude of problems elegantly.
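Pierce's characterization can be sketched without classes at all: if an "object" is a value that bundles its own method implementations, then the value itself determines what code runs. The following minimal Haskell sketch (the names `Shape`, `circle`, and `square` are illustrative, not from any library) shows two objects with the same interface but entirely different representations:

```haskell
-- An "object" as a record of its methods: the value itself
-- carries the code that runs when a method is invoked.
data Shape = Shape { area :: Double, describe :: String }

-- Two constructors with entirely different representations
-- behind the same interface.
circle :: Double -> Shape
circle r = Shape { area = pi * r * r, describe = "circle" }

square :: Double -> Shape
square s = Shape { area = s * s, describe = "square" }

-- Method dispatch: which area computation runs depends on
-- which object we ask, not on the call site.
totalArea :: [Shape] -> Double
totalArea = sum . map area
```

Calling `area` on each element of `[circle 1, square 2]` executes different code per value, which is exactly the dispatch Pierce describes.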
Using this understanding also allows us to cleanly distinguish OO from other paradigms.
For example, type classes in Haskell provide a kind of polymorphism that is indistinguishable from OO in simple scenarios. However, a type class is conceptually and practically distinct from the values it operates on. There is dynamic dispatch, but it is the execution context, not the object itself, that decides what code gets executed. More complex examples (like a “heterogeneous list”) demonstrate that type classes do not provide object-like polymorphism but are merely constraints on types. It is possible to recover some degree of OOP-like polymorphism by using existential types for type erasure, but that still lacks the subtyping known from mainstream statically typed OOP languages.
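The heterogeneous-list distinction can be made concrete. In the sketch below (class and type names are illustrative), a list mixing two instance types is rejected because a type class constrains types rather than describing a type of values, while an existential wrapper erases the concrete type and packs the instance dictionary with the value, recovering object-like dispatch:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A type class: dispatch is resolved from the type known
-- at the call site, not stored inside the value.
class HasArea a where
  area :: a -> Double

newtype Circle = Circle Double
instance HasArea Circle where
  area (Circle r) = pi * r * r

newtype Square = Square Double
instance HasArea Square where
  area (Square s) = s * s

-- [Circle 1.0, Square 2.0] does NOT type check: lists are
-- homogeneous, and "HasArea" is a constraint, not a type.

-- An existential wrapper erases the concrete type and carries
-- the HasArea dictionary alongside the value, so each element
-- brings its own implementation of area.
data AnyShape = forall a. HasArea a => AnyShape a

totalArea :: [AnyShape] -> Double
totalArea xs = sum [area s | AnyShape s <- xs]
```

With the wrapper, `totalArea [AnyShape (Circle 1.0), AnyShape (Square 2.0)]` type checks and dispatches per element; note that even so there is no subtype relation between `Circle`, `Square`, and `AnyShape`.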
This interpretation of OOP is quite useful in day-to-day programming: polymorphism is a great way to solve certain problems. However, it misses out on any deeper insights into what makes a design object-oriented.
See also: (wrong) interpretations of OOP.