back-end development is all about using microservices
What you mean to say is that "blogged articles about back-end development all talk about microservices."
making loosely coupled services that can be deployed separately
To be deployable separately, the services need to be coupled very tightly to their interfaces. E.g., if service A changes its interface, every client of that service would break. I don't call this "loose coupling," even though the "make" command that builds service A versus service B may run in different code checkouts or whatever.
how microservices are connected with each other
There are a few things to unpack there.
First: What pieces go into separate services? Generally, you want a piece that's developed together, and tested together, to be deployed together -- if the four people working on it (or one person working on it!) are sitting right next to each other and talk to each other all day, you don't need to have four separate services; coordinating the union of what they do into one service is often sufficient.
Second: Once you have separate services, there's the question of "service discovery." Generally, this uses some form of DNS -- the client service makes a DNS request for "service X" and the orchestration fabric returns "that service runs on host Y on port Z." SRV records are good for this, although some systems just say "all services run on port P" and allocate one virtual IP per service instead. (Part of the name "service X" also implicitly means "implementing interface A over protocol B.")
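To make the SRV idea concrete, here is a minimal sketch of SRV-style resolution, assuming an in-memory registry standing in for a real DNS server; the service names, hosts, and ports are made up for illustration. Each record mirrors the DNS SRV shape: (priority, weight, target host, port).

```python
import random

# Hypothetical registry: "_service._proto" name -> list of SRV-style records.
REGISTRY = {
    "_users._tcp": [(10, 60, "host-a.internal", 7001),
                    (10, 40, "host-b.internal", 7002)],
}

def resolve_srv(name):
    """Return (host, port) for a service: keep only the lowest-priority
    records, then pick among them with probability proportional to weight,
    roughly as SRV semantics suggest."""
    records = REGISTRY[name]
    lowest = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == lowest]
    weights = [r[1] for r in candidates]
    _, _, host, port = random.choices(candidates, weights=weights, k=1)[0]
    return host, port

host, port = resolve_srv("_users._tcp")
```

In a real deployment the lookup would go through an actual resolver (e.g. a query for `_users._tcp.example.com` SRV records), but the selection logic looks much the same.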
Third: Once you have the service discovery bit down, and set up some DNS service to answer the questions for service discovery, you need to figure out how to horizontally shard the services, assuming your load is large enough that a single instance of the service won't suffice. You can use anything from DNS round-robin, to shared consensus systems (Raft, Paxos), to true stateless services with persistent data in network-attached RAM and storage (Memcached, Redis, various databases, etc.)
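One common sharding scheme sitting between naive round-robin and a full consensus system is consistent hashing: each key deterministically maps to a shard, and adding or removing a shard only moves a small fraction of keys. A minimal sketch, with made-up shard names and key space:

```python
import bisect
import hashlib

def _hash(key):
    # Stable hash of a string onto a large integer ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # Place several virtual points per node so load spreads evenly.
        self._ring = sorted((_hash(f"{n}#{i}"), n)
                            for n in nodes for i in range(vnodes))
        self._hashes = [h for h, _ in self._ring]

    def node_for(self, key):
        # Walk clockwise to the first virtual point at or after the key's hash.
        i = bisect.bisect(self._hashes, _hash(key)) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["shard-1", "shard-2", "shard-3"])
owner = ring.node_for("user:1234")
```

The same key always lands on the same shard, which is what lets a stateless front tier route requests to whichever instance owns the data.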
Fourth: Once you have all of this working, you need some kind of networking fabric that can make sure that IP-based connections go to the right place. This is where various networking plugins like Flannel or high-level routers like Istio come in -- or just a simple set of haproxy or nginx servers that map incoming requests to back-end nodes, without necessarily needing the containerization mechanisms.
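For the plain-nginx variant, the mapping is just an upstream pool per service; a hypothetical fragment, with placeholder hosts and ports:

```nginx
# One upstream pool per back-end service.
upstream users_service {
    server 10.0.0.11:7001;
    server 10.0.0.12:7001;
}

server {
    listen 80;

    # Route by URL prefix to the owning service's pool.
    location /users/ {
        proxy_pass http://users_service;
    }
}
```

nginx round-robins across the servers in the pool by default, which is often all the "fabric" a small deployment needs.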
However, take a step back: Do you really need this? Microservices solve exactly one problem: you have more engineers working on a single application than you can reasonably coordinate using simpler means. Thus, split your hundreds or thousands of engineers into smaller teams, each of which gets some defined resources (CPU, memory, database, networking, etc.) and is told to solve specific problems (friends lists or new user flow or email reading or whatever) and publish an API/interface that it then has to stay true to for the foreseeable future. This adds tons of additional overhead in the management of development, and adds significant impediments to quick development, but is necessary to manage development at all for large organizations. If you have a small organization, all that extra orchestration and management will end up just sucking up time from your scarce development resources, and you're probably better off just writing one or two "main services" that "do the things."
Defining interfaces, explicitly or implicitly, may still be a good idea -- almost every game or persistent service of note uses some kind of IDL to define its packet data units and requests and sessions. The question is more what hurdles are in the way when you change something. If you can just re-compile everything that depends on the changed interface, and deploy them all in one swell foop, that'll let you move much faster than if you need to stay true to previous versions for some extended period of time.
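As an example of what such an IDL looks like, here is a hypothetical protobuf sketch; the package, message, and service names are invented for illustration:

```proto
syntax = "proto3";

package friends;

// Packet data units: the request/response pair for one operation.
message FriendListRequest {
  string user_id = 1;
}

message FriendListResponse {
  repeated string friend_ids = 1;
}

// The published interface the team commits to.
service FriendService {
  rpc GetFriends(FriendListRequest) returns (FriendListResponse);
}
```

The field numbers are the compatibility contract: as long as you only add new fields with new numbers, old clients keep working, which is exactly the "stay true to previous versions" burden the paragraph above describes.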