Complex Networks: Generate Simple to Chaotic Real World Networks

An explanation and tutorial using python and NetworkX.

Complex networks are absolutely everywhere; that’s what makes the field of network science so interdisciplinary. The relatively new discipline was formalised around 20 years ago as a domain in its own right. Its applications span areas including sociology, communications, biology, physics and computer science, to name a few.

In this post I’ll give you an introduction to complex networks, run through some basic ways of describing them and attempt to model a real-life complex network in Python. Strap in.

What exactly are complex networks?

Before we start digging around in the weeds, let’s try and define what a complex network actually is first.

“A complex network is a graph (network) with non-trivial topological features”… “(Features that) often occur in graphs modelling of real systems”

— Wikipedia

Examples of complex networks

Complex networks are nothing more than graphs on steroids. What makes complex networks complex is the fact that their subgraphs have varying degrees of randomness and uniformity baked in. This reflects the chaotic myriad of interactions we see arise through natural processes in the real world. You might be thinking, “hang on a minute, these networks might as well be random?” They may seem that way, but there are universal patterns we can find. We can also apply graph analysis to them to get a clearer understanding, and we can generate artificial versions of them with Python. So let’s do that.

Why try and model complex networks?

Understanding and modelling complex networks lets you do cool, useful and ground breaking things. Here are some examples of questions you can answer:

  • How long will it take to send data from a Google data centre to my smart phone?
  • How can we find authoritative websites like the BBC’s?
  • What parts of the brain correspond to which neurological disorders, e.g. cluster migraines?
  • How can we assess the economic value of creating new shipping routes between ports?

Great, but why do we need to model the networks? Can’t we just observe the network and store it in some data structure like an edge list? Then run analysis on that? Well, not always. In some cases we can’t directly observe and put the network into a computer because:

  • It’s too big
  • We can’t observe its micro-structures (often the case in biology)
  • Not enough data
  • It’s constantly evolving

When this is the case the best we can do is approximately model the network and run analysis, prediction and simulations on that instead. That’s where the fun begins.

Some Graph Theory

But first, we must understand large complex networks on a micro scale.

ER Random Graphs

Let’s start with some basics: how to generate randomness in a network. The ER (Erdős–Rényi) model is the simplest mathematical model, so let’s start there. The ER model makes things easy for us because it makes some very helpful assumptions:

  • Firstly, that there is an equal probability of any given pair of nodes in the graph being joined by an edge.
  • Secondly, that the number of nodes is fixed.
  • Lastly, that any two edges are independent: the existence of an edge neither increases nor decreases the likelihood that any other two nodes will share an edge.

Below, our ER(n, p) method creates a NetworkX graph object which we can visualise. To assign edges between the n nodes, it simply generates a random number between 0.0 and 1.0 for every possible edge; if the random number is less than p (the probability of an edge between two nodes), an edge is assigned.
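A minimal sketch of such an ER(n, p) method, following the pair-by-pair coin-flip approach described above (the exact implementation details are an assumption):

```python
import itertools
import random

import networkx as nx

def ER(n, p):
    """Generate an Erdős–Rényi random graph with n nodes and edge probability p."""
    G = nx.Graph()
    G.add_nodes_from(range(n))
    # Consider every possible pair of nodes exactly once
    for u, v in itertools.combinations(range(n), 2):
        # Flip a biased coin: add the edge if the draw falls below p
        if random.random() < p:
            G.add_edge(u, v)
    return G

G = ER(10, 0.4)
```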

Random Graph
(Image by Author)

When we assume a graph with 10 nodes and 40% chance of any two nodes having an edge, we get the following graph. If we run the code again, we’ll get another graph with the same number of nodes but most likely a different configuration. Perfect! We can now generate random networks.

Degree Distribution

A useful way to aggregate/summarise a network is with its degree distribution. A given node’s degree is the total number of edges it has. The degree distribution is therefore the frequency with which each degree occurs in the graph (a degree tally).

So what’s the shape of this distribution for our random graph?

To begin with, the chance of picking a node at random and it having a given number of edges follows a binomial distribution. As n (the number of nodes) increases, the degree distribution tends towards a Poisson distribution. Let’s see this in action as we increase the number of nodes with our ER() method while keeping the probability of an edge the same.

Degree Distribution (Left) 10 Nodes
Degree Distribution (Right) 1000 Nodes
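The tallies behind plots like these take only a few lines to compute. A sketch using `nx.gnp_random_graph`, NetworkX’s built-in equivalent of our ER method:

```python
from collections import Counter

import networkx as nx

# A random graph, as generated by our ER method earlier
G = nx.gnp_random_graph(1000, 0.4)

# Tally how many nodes have each degree
degree_counts = Counter(d for _, d in G.degree())

# Sorted (degree, frequency) pairs describe the degree distribution
distribution = sorted(degree_counts.items())
```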

Uniformity vs Randomness

Regular, uniform graphs are the opposite of random graphs. In graphs of this type, such as a lattice or the circular ladder graph below, the edges are defined by a finite set of rules. Both graphs below have the same number of nodes and roughly the same number of edges, yet very different degree distributions.

Random graph with 30 nodes & Degree distribution
Uniform graph with 30 nodes & Degree distribution

Before we start modelling some more complex networks, let’s run through some more basic network properties. For this we’ll be using NetworkX again, so let’s create our random graph as G.

import networkx as nx

G = ER(30, 0.1)

Network Density

How is density in a network measured? A network with an edge between every pair of nodes is called a complete graph and has maximal density. What fraction of the possible edges are actually there? That fraction is called the network density.

Complete network with 30 nodes.

Here’s how you can calculate the network density in Python:
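A sketch of that calculation (assuming G is an undirected NetworkX graph like the one created above):

```python
import networkx as nx

# Stand-in for the random graph created earlier
G = nx.gnp_random_graph(30, 0.1)

n = G.number_of_nodes()
possible_edges = n * (n - 1) / 2   # every unordered pair of nodes
actual_edges = G.number_of_edges()

density = actual_edges / possible_edges
```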

A quicker way would be to use NetworkX’s built-in function:
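Which looks like this:

```python
import networkx as nx

G = nx.gnp_random_graph(30, 0.1)
density = nx.density(G)  # same fraction, computed by the library
```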


Network Connectivity


Network connectivity is a measure of how robust a network is. A network is connected if there is at least one path from every node to every other node. What’s the minimum number of nodes I need to remove to separate the network? That’s the simplest measure of network connectivity.

Take the example of a rail network:

“How many stations need to be out of action for it to be impossible for commuters to travel between any two stations?”
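NetworkX answers that question directly with `node_connectivity`, which returns exactly this minimum number of nodes. A sketch on a hypothetical 10-station loop rather than a real rail network:

```python
import networkx as nx

# A loop of 10 "stations": taking out one station still leaves a path
# around the other way, so two stations must be removed to split it
G = nx.cycle_graph(10)

min_nodes_to_remove = nx.node_connectivity(G)
```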

Average Shortest Path


A measure of the efficiency of information/mass transport on a network. To calculate it: find the shortest path for every possible pair of nodes, then average those path lengths to get the average shortest path for the whole network.
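Both steps, sketched on a small toy graph; NetworkX’s `average_shortest_path_length` performs the same averaging in one call:

```python
from itertools import combinations

import networkx as nx

G = nx.cycle_graph(10)

# Shortest path length for every unordered pair of nodes, then the mean
lengths = [nx.shortest_path_length(G, u, v) for u, v in combinations(G.nodes, 2)]
avg_manual = sum(lengths) / len(lengths)

# The library's one-call equivalent
avg_builtin = nx.average_shortest_path_length(G)
```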

Network Diameter


Given we have a list of all the shortest paths between every two nodes in a graph, if we then take the longest shortest path and count how many edges that path contains, that’s the network’s diameter!
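The same recipe in code, on a toy graph (`nx.diameter` is the one-call equivalent):

```python
import networkx as nx

G = nx.cycle_graph(10)

# All-pairs shortest path lengths; the diameter is the longest of them
lengths = dict(nx.all_pairs_shortest_path_length(G))
diameter_manual = max(max(d.values()) for d in lengths.values())

diameter_builtin = nx.diameter(G)
```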

Average Clustering Coefficient

clustering_coeffs = nx.clustering(G).values()  # per-node clustering coefficients
average_clustering_coeff = sum(clustering_coeffs) / len(clustering_coeffs)
# Equivalently: nx.average_clustering(G)

A higher clustering coefficient indicates greater ‘cliquishness’. Think of this like “how many of a node’s friends know one another?” This is a useful metric for describing the extent of “local densities”: pockets of community structure inside complex networks.

Random vs regular circular ladder graph

Let’s see how well our random graph squares up against the regular circular ladder graph…

Regular Circular Ladder Graph compared to Random Graph

The random graph has a higher average clustering coefficient, which varies between the subgraphs and nodes inside it. Therefore, it’s well clustered in some parts but weakly in others. The random wiring in this case happens to produce a better average shortest path on this network, indicating it’s better at propagating information/signals than the regular circular ladder graph.

By the very nature of our circular ladder graph, there are no “cliques”, so it scores a zero on average clustering coefficient. Our circular ladder graph, however, is more “robust”, as you would need to remove 3 nodes (versus just 1, node 19, in our random graph) for the graph to be separated.

Overall, a little bit of random noise makes our network just OK in all aspects, whereas regular graphs will reliably perform well on the particular properties they are optimised for. These strengths come at the sacrifice of other potentially important network properties. In our example, the circular ladder graph is super robust but terrible at propagating information as a result.

This begs the question: how does mother nature architect the complex networks around us?

A real life network

Random graphs are fantastic because they let us model complex networks through parameterisation and introduce a probabilistic element; however, they are rubbish at modelling anything useful in real life. In real life there are just too many interactions, too many types of causal effect… too complex. So how can we find patterns in the chaos of naturally occurring systems? How do we find the golden rules that all these real-life systems follow? We’ll attempt to answer this question by using some real-world data and writing some code.

For this example, we are looking at publicly available UK Southern Rail schedule data. I’ve cleaned the data up and filtered it to only routes that operate on the UK Southern Rail network (to make our lives a bit easier). Here’s what the data looks like:

Raw network data

Now let’s visualise the network! We can use NetworkX’s spring layout.

import pandas as pd
import networkx as nx
import matplotlib.pyplot as plt

plt.figure(figsize=(30, 20))

data = pd.read_csv("southern_rail_data.csv")

# Build (origin, destination, time) edges, dropping self-loops
edges = [
    list(row) for row in data[
        ['origin_station_name', 'destination_station_name', 'time']
    ].values if row[0] != row[1]
]

nodes = list(set(data.origin_station_name))
Gr = nx.DiGraph()

for e in edges:
    Gr.add_edge(e[0], e[1])

nx.draw_spring(Gr,
               # width=data.time * 10,
               )
plt.show()
Southern Rail Network, (Image by Author)

An important side note: This rail network has been created in the real world, so has been influenced by a number of constraints including geographical limitations and population densities.

Small world property

The first and most important property real-life networks follow is the principle of the “small world”. Most commonly observed in social networks, it’s the idea that most people in the world are connected by only a few degrees of separation. You may know someone, who knows someone, who knows, say, Barack Obama or Kevin Bacon. In network science terms, there’s a low average shortest path. This occurs as a result of subsections of the graph having small clusters loosely connected to other clusters. In these networks we also see relatively high clustering coefficients.

This topological feature makes it easy to get from one point in the network to another. It’s so important that we see it in all complex networks. It’s what gives us instant reflexes in the nervous system, it’s what makes the internet possible and it’s what keeps our lights on thanks to the national power grid. In short, mother nature loves it.

Our southern rail network has an average shortest path of 6.23 and average clustering coefficient of 0.29.
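Those two numbers come from NetworkX’s `average_shortest_path_length` and `average_clustering`. Since computing them needs the rail edge list loaded earlier, this sketch runs the same calls on a hypothetical small-world stand-in with the same 219 nodes:

```python
import networkx as nx

# Hypothetical stand-in for the rail graph: a connected small-world
# graph with the same node count (219 stations)
G = nx.connected_watts_strogatz_graph(219, 4, 0.1, seed=42)

avg_path = nx.average_shortest_path_length(G)
avg_clustering = nx.average_clustering(G)
```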

Let’s see if we can model something similar to the Southern rail network using our random graph generator. We can keep the number of nodes the same: 219 (stations). The average degree of the network is 3. We can use this to approximate a random graph with a similar average degree.

n = 219
p = 0.03
G = ER(n, p)
nx.draw_spring(G)
Random Graph with 219 Nodes

This random graph gives us a better average shortest path of 3.18, but a worse clustering coefficient of 0.02.

There’s a problem, however: degree distribution. If we want to model the network effectively, the degree distributions need to match.

(Left) Southern Rail Network Degree Distribution
(Right) Random Graph Degree Distribution


The Watts-Strogatz model is a simple method for generating graphs with the small world property.¹ In this model we start with a lattice graph and randomly rewire its edges to form a new graph. The new graph satisfies the small-world properties of a low average shortest path and a relatively high average clustering coefficient.

We can control how random our graph will be by adjusting the global parameter that sets the “probability of rewiring each edge” (the graph’s density is governed separately by how many nearest neighbours each node is initially joined to).

WSG = nx.watts_strogatz_graph(219, 7, 0.4)
Lattice graph with 219 nodes
Lattice graph rewired (40% chance of rewiring)

Looks better! However, our degree distribution still doesn’t match. Looks like we’re going to have to try something else!

Left: Southern Rail Network Degree Distribution
Right: Watts Strogatz Graph Degree Distribution

Scale free networks

The last stop on our network-modelling adventure is scale-free networks. This property exists in almost all large complex networks. Here are its key features:

  • Degree distribution follows a power-law distribution

Southern rail network’s degree distribution is more like a power-law. (With the exception of a handful of stations with 1 degree, as they are where rail lines terminate.) (Image by Author)

  • Fractal-like — zooming into a subgraph produces the same shape of distribution
  • 2 < γ < 3 (where γ is the power-law exponent)
  • 80/20 rule: ~80% of nodes have degrees within the bottom 20% of all degrees (Pareto principle)
  • A long tail of nodes with high degrees
  • The diameter of a growing scale-free network can be considered almost constant in practice
  • It’s super robust!

The key takeaway with scale-free networks is that they are fractal-like by nature. A glance at the whole network versus a zoomed-in view of one of its subgraphs will look much the same topologically. This means that no matter how much the network grows, it will retain a relatively short average shortest path, and its diameter won’t drastically increase.

As luck would have it, scale-free networks can be generated through a universal phenomenon we see in nature and society: the idea of preferential attachment. Or, as my university professor would put it, “The rich get richer”. Others may call this idea “the law of attraction”. However you want to characterise it, we can map the idea to network science in the following way:

The probability of a node gaining a new connection is proportional to the number of connections it already has.

So nodes with high degrees accumulate new connections faster than nodes with lower degrees. Popular people make new friends faster than unpopular people. New stations pop up and connect to already large stations. This phenomenon is everywhere!
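Preferential attachment is simple to sketch: choosing uniformly from a list of edge endpoints gives each node a chance proportional to its degree. The `preferential_pick` helper and the star-graph example below are illustrative assumptions, not part of any library:

```python
import random
from collections import Counter

import networkx as nx

def preferential_pick(G):
    """Pick a node with probability proportional to its degree (a minimal sketch)."""
    # Each node appears in this list once per edge it has, so a uniform
    # choice over edge endpoints is a degree-weighted choice over nodes
    endpoints = [node for edge in G.edges() for node in edge]
    return random.choice(endpoints)

# A hub (node 0) with 5 spokes: the hub holds half of all endpoint slots,
# so it should be picked far more often than any single spoke
G = nx.star_graph(5)
picks = Counter(preferential_pick(G) for _ in range(2000))
```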

Barabasi-Albert Model

The Barabasi-Albert model puts preferential attachment into practice to generate scale-free networks (new nodes are proportionally more likely to attach to existing nodes with higher degrees). Let’s create one again with NetworkX. Again, the number of nodes is 219. The new parameter, controlling “the number of edges new nodes will attach to”, is set to 3.

SC = nx.barabasi_albert_graph(219, 3)

The BA model finally generates a graph that best resembles the Southern Rail network. The key attribute, the degree distribution, now matches much better, with the exception of the nodes/stations at the end of a rail line that have only 1 degree. That, however, is a nuance specific to rail networks. So let’s assume we’ve achieved our goal, hurray! We’ve modelled a real-life chaotic network!
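A quick sanity check on the BA side of that comparison: tally the degrees and confirm the heavy tail (this sketch regenerates the BA graph with a fixed seed; comparing against the rail network itself needs the data loaded earlier):

```python
from collections import Counter

import networkx as nx

# Scale-free graph with the same number of stations as the rail network
SC = nx.barabasi_albert_graph(219, 3, seed=1)

degrees = [d for _, d in SC.degree()]
degree_counts = Counter(degrees)

# Every new node contributes exactly 3 edges, so the edge count is
# deterministic: 3 * (219 - 3) edges; a few hubs hold far more than 3
n_edges = SC.number_of_edges()
```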

Comparison of model generated with BA method vs Actual Network



[1]: Recommender Systems Research: A Connection-Centric Survey — Scientific Figure on ResearchGate. Available from: [accessed 29 Dec, 2020]