MPI is designed for high-performance computing: each task manages its own private memory, rather than sharing a single address space as in the standard shared-memory model.

MPI relies on messages passed between tasks whenever communication is needed.
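As a minimal sketch of this model in C (compiled with an MPI wrapper such as mpicc and launched with mpirun; the variable names are illustrative): every process runs the same program, but each rank holds its own private copy of the data.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this task's id within the communicator */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of tasks */

    /* 'local' lives in this rank's private memory; no other rank can
     * read it without an explicit message. */
    int local = rank * rank;
    printf("rank %d of %d computed %d\n", rank, size, local);

    MPI_Finalize();
    return 0;
}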

int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
 
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

Communicators define groups of processes that are allowed to communicate with one another, and tags are used to match specific sends to specific receives.
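A minimal sketch of a matched send/receive pair, assuming at least two processes are launched (e.g. mpirun -np 2); the tag value 42 and the variable names are arbitrary choices for illustration. The message is only delivered to this receive if the communicator and tag match on both sides.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int tag = 42;  /* arbitrary tag; must match between sender and receiver */
    if (rank == 0) {
        int value = 123;
        /* send one int to rank 1 within MPI_COMM_WORLD */
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int value;
        MPI_Status status;
        /* receive one int from rank 0; status records the actual sender and tag */
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d from rank %d\n", value, status.MPI_SOURCE);
    }

    MPI_Finalize();
    return 0;
}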

Many different MPI implementations exist (e.g. Open MPI, MPICH), often designed for a specific high-performance purpose.

The message-passing model makes MPI quite portable: it can run on most systems regardless of the underlying memory architecture. However, the API is quite complex and therefore carries a large overhead in both program design and programming effort.