
It’s a simple question: how much spare fiber optic capacity should be placed in the network, and where?

It can have a massive impact on your costs: too much and you’re wasting capex; too little and your opex goes through the roof. Coming up with the right answer is not as easy as it might seem. The question is not how much fiber optic capacity we need long term, but how we get the capacity when we need it.

The Main Mistake: The wrong amount of spare fiber optic capacity in the wrong spot

Take a hypothetical simple 3 tier home run FTTP network (no splitters):

  • At first glance, it looks like we could have 12x DPs (Demand Points) per MPT (Multiport), and 12x MPTs per FDH (Fiber Distribution Hub).
  • However, the expected average take-up is 50% so that means up to 24x DPs per MPT.
  • There is also an expected growth in DPs of 20%, so 20x DPs per MPT (24 / 120%) is possible. This places all the spare capacity at the MPT which will mean every new DP has only a drop to build so minimal construction and an overall minimal build.
  • But the growth in DPs isn’t evenly distributed. Half of the growth is new clusters of 20 DPs, and the other half is scattered throughout the existing DPs. Now, the answer might be 22x DPs per MPT, and 11 MPTs per FDH.
  • It is expected to take a number of years to achieve that take up. So if we take the time value of money into account, it will be better to delay some build until it is required. Now the answer might be 24 DPs per MPT, and 10 MPTs per FDH.
  • However, the distribution of take up is highly variable. Some MPTs will fill up much quicker than others and so then the answer might be 20 DPs per MPT, and 10 MPTs per FDH.
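The arithmetic in the first few bullets can be sketched in a few lines. This is only a restatement of the illustrative figures above (12 ports, 50% take-up, 20% growth), not a real design rule:

```python
# Sketch of the spare-capacity walkthrough above, using the
# illustrative figures from the bullets (not a real design rule).

ports_per_mpt = 12   # physical ports per MPT at first glance
take_up = 0.50       # expected average take-up
growth = 0.20        # expected growth in DPs

# 50% take-up means each port can notionally serve 2 DPs.
dps_per_mpt = ports_per_mpt / take_up                 # 24
# Reserving headroom for 20% growth brings it back down.
dps_per_mpt_with_growth = dps_per_mpt / (1 + growth)  # 20

print(f"DPs per MPT at 50% take-up: {dps_per_mpt:.0f}")
print(f"DPs per MPT allowing 20% growth: {dps_per_mpt_with_growth:.0f}")
```

The later bullets (clustered growth, time value of money, variable take-up) each perturb these two numbers further, which is exactly why a model beats a rule of thumb.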

As you can see, as more pieces of information get added to the calculation, the optimal distribution and quantity of spare will change. Add to this the other factors discussed above and it’s easy to see why some carriers don’t even attempt to run a model and thus make so many costly mistakes. Being able to model these variables and run simulations becomes a powerful tool in identifying how much spare capacity to place at each point in the network. It can also help understand what the cost implications are for different sparing rules, both for short and long term costs.

The other aspect to consider is the cost difference between the minimal capacity option and the higher capacity option. For example, when choosing between a 144f cable and a 288f cable:

  • Cost of 144f cable now = $20/m
  • Cost of a 288f cable now = $21/m
  • Cost of a future second 144f cable = $10/m (Net Present Value)

If there is a 20% chance that a 144f cable will be insufficient long term, then the expected cost of installing a 144f cable first is $20 + $10 × 20% = $22/m. Since that exceeds the $21/m for a 288f cable now, the architecture should specify a 288f cable.

How can we overcome these challenges?

As stated above, some carriers don’t even attempt to run a model, and so they end up making a lot of costly mistakes.

Through automation and optimization across FTTx network design, we can re-evaluate how to simulate fiber optic capacity across a network. With detailed designs running at the core of a simulation, you can be far more confident across long term growth and ongoing strain across your fiber network.

Want to see how we do it? Why not schedule a workshop today?

A 2013 study on Australia’s National Broadband Network, 3 years into the rollout, found that capex savings of over 25% on the FTTP rollout could have been easily achieved, with no negative flow-on effects, simply through changes to the architecture. This national effort aims to bring robust broadband to all of Australia, and has extensive urban and rural components. A further 15% of savings were identified but not pursued, as they may have caused an increase in operational expenses. Below are a number of items that, if well planned, can make a tangible difference in the success of any FTTx network rollout.

Mistake 1: Only looking at homes

Many FTTx networks begin with a goal to serve homes. But getting to those homes means passing a whole range of other locations with revenue potential. Maybe they won’t fit into the business model, but not considering a full range of opportunities puts an FTTx network at a disadvantage from the start. Some customers want high bandwidth services, some want multiple services, and others want just a basic broadband service, but a well designed architecture will be able to cater to any and all of them without adding much or any complexity.

Mistake 2: Assuming all Multi Premises Sites are much the same

When we first think about sites with multiple premises we think of apartment buildings, but that is just one type. If you try to service a business park using an architecture made for apartments, you are likely to have a network that simply doesn’t match the requirement. Recognizing the variations that exist will help you develop a flexible architecture.

Mistake 3: Simplistic Modelling of network and demand

Draw an “average” street and you can come up with an architecture that achieves perfect utilization. Put that same architecture against a real suburb, however, and you may find it consistently fails to deliver what it is supposed to. Any proposed architecture needs to be run against a simulator to see how conforming designs respond to different densities, take-up rates, and the existence or otherwise of duct and aerial infrastructure.
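A toy version of such a simulator illustrates the point. The sketch below (parameters invented, demand modelled as independent per-DP take-up, which a real model would refine with clustering and growth over time) counts how often an MPT overflows its ports:

```python
# Minimal Monte Carlo sketch: run a fixed architecture (12 ports per
# MPT) against variable take-up and count how often an MPT overflows.
# All parameters are illustrative, not real design figures.
import random

random.seed(42)

def overflow_rate(ports, dps_served, take_up_mean, trials=10_000):
    """Fraction of simulated MPTs whose active DPs exceed the ports."""
    overflows = 0
    for _ in range(trials):
        # Each DP independently takes up service.
        active = sum(random.random() < take_up_mean
                     for _ in range(dps_served))
        if active > ports:
            overflows += 1
    return overflows / trials

for dps in (20, 22, 24):
    rate = overflow_rate(ports=12, dps_served=dps, take_up_mean=0.5)
    print(f"{dps} DPs per 12-port MPT -> overflow in {rate:.1%} of trials")
```

Even this crude model shows how sharply the overflow risk climbs as more DPs share an MPT, which is the trade-off the sparing rules above are balancing.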

Mistake 4: Underestimating the cost of complexity

“It’s just one more tier in the network”; “It’s just one more technology in the mix”; or “It’s just one more service offering”. They may seem innocuous, but the cumulative effect can multiply. Not all complexity is bad, but recognizing the cost is important in determining whether to proceed.

Mistake 5: Assuming staff will follow instructions

Human behavior can be unpredictable, but one thing is certain, people will sometimes fail to read or understand or follow instructions. The result is delays, rework, and poor quality, which can happen at each stage of the process. But what does this have to do with architecture? There are a number of things that can be done in defining an architecture that minimizes the frequency or impact of these instances all the way through the chain.
Before looking at specifics, it is useful to understand the causes of this behavior. It’s not that people are malicious or careless — usually.

Mistake 6: Not planning for change

In our industry, the only constant is change. Our challenge is that fiber equipment such as cables and splice joints have an expected life of 20+ years, so how do we ensure we can get 20 years of valuable use from it? We can’t predict what will happen in one, five, or ten years, let alone 20 years. Nonetheless, ignoring the potential for change is a recipe for disaster.

How can we overcome these challenges?

Old methods of planning and designing FTTx networks can no longer support the complexities of FTTx deployments. We need to adapt with new technologies, compare the real underlying costs from planning, design, and construction through to maintenance, and be as certain as we can that we’re making the right decisions.

Data. It seems every network project I’ve encountered has been stuck at this stage. As a design software vendor we are very attuned to the importance of consistent and comprehensive data (software tends to complain loudly and relentlessly if data is poor), but typically my customers are already aware of the challenge. The planning team have clutter in the address set, the design team have a duct (conduit) set in three map projections, and the construction team discover the pole file simply isn’t accurate.

The underlying problem is an incorrect assumption. Just because the data is useful for its owner, doesn’t mean it will be useful for you.

Not simply an ETL problem

The typical approach is to treat data acquisition as an ETL activity (Extract, Transform, Load). We will extract the data from the source database, transform it into the format and structure we need, and then load into our database. The recurring quip at Biarri Networks is “But it made sense when we drew it on the whiteboard.” The solution lies in ‘intelligence’ during the Transform stage, but before solving we need to understand the problem.

Think like an Historian

Where did this data come from and how did the owner make use of it? And critical in this question is whether they had a human-in-the-loop (HiL) process when they used the data.

Let me explain by way of example.


Consider an address set provided by a postal service. A major irritation is duplicates in the data set, such as in the list of addresses to be served. Sometimes these are perfect duplicates, such as two rows with an entry for 123 Main Street. Your data team have probably already configured the ETL to find and resolve these.

Other times the duplicate can be more subtle, such as 123 Main Street and 123 Main St (note the abbreviation). Or it could be a compound address such as Unit 4, 123-127 Main Street. From the postal service point of view, the duplicates are harmless; they know there will only be one letterbox for the mail to be delivered to. A human will resolve the ambiguity. For the fibre project it is a different matter. By not eliminating these duplicates you risk over-servicing the premises at Unit 4, 123 Main St with multiple fibre allocations.
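This is the sort of ‘intelligence’ the Transform stage needs. A minimal sketch, normalising street suffixes before de-duplicating (the abbreviation table and helper names are ours, not any standard library):

```python
# Sketch of suffix normalisation to catch subtle address duplicates.
# The abbreviation table is illustrative; real address matching is
# far richer (units, ranges, misspellings, geocoding).
SUFFIXES = {"st": "street", "rd": "road", "ave": "avenue", "dr": "drive"}

def normalise(address: str) -> str:
    words = address.lower().replace(",", " ").split()
    return " ".join(SUFFIXES.get(w, w) for w in words)

addresses = [
    "123 Main Street",
    "123 Main St",      # subtle duplicate: abbreviated suffix
    "125 Main Street",
]

seen, unique = set(), []
for addr in addresses:
    key = normalise(addr)
    if key not in seen:
        seen.add(key)
        unique.append(addr)

print(unique)  # ['123 Main Street', '125 Main Street']
```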

Internal inconsistencies

There was never a single source data set. Just because you were handed a single file, doesn’t mean it was created that way. Mergers and acquisitions may have resulted in a mega data set that is similar in format and structure only.

A classic case is land parcels which are typically defined by polygons. While the notion of discrete parcels with discrete owners is a cornerstone of our economy, there are numerous ways to represent the information often with different jurisdictions applying their own policy. One may represent sub-divisions as two distinct polygons overlaid on the original parcel – resulting in three overlapping parcels. Others may remove the original. And in more complex subdivisions there may be the private parcels and the common owned parcels interlocking like a jigsaw, all overlaid on the original parcel. Town planners and architects will be familiar with the local policy and so from their perspective the data is consistent and usable, but if your project spans jurisdictions you should expect each to have generated their data in a different manner. Unless you can detect and resolve these you risk process churn and confusion in your team.
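Inconsistencies of this kind can at least be detected automatically. A minimal sketch, simplifying parcels to axis-aligned bounding boxes so it stays self-contained (real parcel polygons need true intersection tests from a GIS library):

```python
# Toy consistency check for parcel data: flag parcels whose bounding
# boxes overlap, which may indicate a sub-division overlaid on the
# original parcel. Rectangles stand in for real polygons here.
Parcel = tuple  # (id, xmin, ymin, xmax, ymax)

def boxes_overlap(a: Parcel, b: Parcel) -> bool:
    _, ax0, ay0, ax1, ay1 = a
    _, bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

parcels = [
    ("original", 0, 0, 10, 10),
    ("lot-A", 0, 0, 5, 10),      # sub-division overlaid on original
    ("lot-B", 5, 0, 10, 10),     # sub-division overlaid on original
    ("next-door", 10, 0, 20, 10),
]

overlaps = [(a[0], b[0]) for i, a in enumerate(parcels)
            for b in parcels[i + 1:] if boxes_overlap(a, b)]
print(overlaps)  # the original overlaps both lots; adjacent parcels only touch
```

Flagged overlaps can then be routed to a human, or resolved by a jurisdiction-specific policy rule.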

Missing and erroneous data

Where a third party provides infrastructure data such as poles or conduit, we frequently assume the data will be correct and complete: for instance, that the geometric path of the conduit matches the physical world.

This ignores the maxim “If it ain’t broke, don’t fix it”. When that third party built the infrastructure they needed reasonable data, but once in operation they have little interest in maintaining it, other than when a fault occurs. So if their infrastructure is stable, and on the rare occasions there is a fault the service agent in the field uses initiative to fill in the gaps, the third party has no incentive to maintain their data.

Unless you establish a process and culture that is tolerant to these errors and omissions you risk creating a system that spends more time in feedback and rectification than in progressing the build of your network.

Free data. Sometimes it is the gift that keeps on taking.

When people discover that I work in telecommunications, helping some of the biggest fiber optic projects around the world, they invariably ask me why the projects are much harder than expected.

That got me thinking, what is it about the fiber projects that makes them so hard, and are they unique?

I’ve spoken before about the inherent complexity of the network and the tens of thousands of elements that connect a suburb, but here I want to talk about the project.

I think all engineering projects have their challenges, but what sets fiber apart is that it shares complexity across numerous engineering disciplines.

Each street is the same but different – Automotive process engineering

Just like manufacturing a car, the building of the network requires repetition of activities in each suburb or street, however there are differences in each. Just like a customer may order leather seats or some other option on a car that is otherwise identical to the next one on the line, each fiber area will be assembled with the same equipment but differ. One may have a university that requires special handling, or a major road that will be difficult to build across, or a new development with multiple entrances. Each will require variation that must be decided in consultation with several experts, and then tracked to completion.

Working with the public – Road engineering

Car manufacturing occurs in the controlled environment of a factory, whereas fiber projects occur in the public space. So in that sense fiber projects share many challenges with a road project. Wherever the network is to be constructed, residents need to be informed, traffic control implemented, and various permits acquired. But unlike a road, the fiber has to be built into each house, requiring coordination with each private land-owner.

Not knowing what will be found when digging down – Civil engineering

While the network is fiber, the bulk of the activities, and source of project risk, is in the ‘civils’. Where a network is installed underground, new conduit must be laid either by trenching or boring. In either case it requires digging through ground. While you may have geologic surveys that tell you the soil type, you never know where the rock that will stall you is.

Progress can’t be seen with our eyes – Wireless engineering

On most civil engineering projects there is ‘line of sight’. Stand in the right place and an experienced manager can gauge the progress of the project. The frame to the 10th floor is complete, or the bridge footings are in place. A fiber project is distributed and either underground or, if done well, unobtrusive. A manager is entirely reliant on data to gauge the progress of the project. There is no easy source for gut feel. In this sense it is like a wireless network, where you are reliant on accurate data to tell you what can’t be seen.

Tie it all together virtually – Network/software engineering

Which leads to the last challenge. The physical network is operated virtually. So accurate records have to be mapped to software that manages customers, billing and the data traffic. And this needs to be ready when the wireless, civil, road, and process engineering challenges are sorted.

As you can see, while the goal of the project is to get an 8 micrometer strand of glass to each house (and then shoot light down it), there is plenty of complexity to be addressed.
Engineering Industry Value Chain

I’m sure each engineering project is hard, but it seems this combination for fiber projects makes them unique. I take my hat off to the folks who coordinate all this.

With such a strong driving force behind Fibre to the Home (FTTH) connections worldwide we thought it was about time to join the community.

Biarri Networks would like to announce our partnership with the Fiber to the Home (FTTH) Council Americas. The FTTH Council has strong ties to the development and deployment of FTTH connections throughout the United States and around the world including Asia Pacific, Africa, Europe, and the Middle East & North Africa.

We know that we can help deliver the FTTH Council’s mission of accelerating the deployment of all-fibre access networks: sooner, more efficiently, and at a lower cost.

Biarri Networks uses FOND, our patented software, to create the lowest cost fibre optic network designs. Compared with industry-standard fibre design approaches, we have been able to demonstrate an 80% reduction in design time, a 20% reduction in material costs, and designs for 3000+ homes in minutes.

Our team is made up of some of the brightest mathematicians in Australia and around the world, meaning that through our consultancy services you know you are in the right hands. Whether you are formulating how to connect your business or community, or trying to build a cost benefit analysis, we can help.

Who is the FTTH Council Americas?

The Fiber to the Home (FTTH) Council Americas is a non-profit association consisting of companies and organizations that deliver video, Internet and/or voice services over high-bandwidth, next-generation, direct fiber optic connections – as well as companies that manufacture FTTH products and others involved in planning and building FTTH networks. The Council works to create a cohesive group to share knowledge and build industry consensus on key issues surrounding fiber to the home. Its mission is to accelerate deployment of all-fiber access networks by demonstrating how fiber-enabled applications and solutions create value for service providers and their customers, promote economic development and enhance quality of life.

Who is Biarri Networks?

Biarri Networks is an Australian network optimisation company that operates globally. We use mathematics and the latest operations research methods in order to build powerful software to create the lowest cost fibre optic network design. We work with communities, businesses and governments to empower better decision making within their fibre optic network rollouts from building a cost benefit analysis, to integrating fibre rollout plans into FOND.

Begin the discussion and see how we can help you design the lowest cost fibre optic network

Many companies around the world are upgrading old copper networks with technologies such as FTTN, FTTB or FTTP, pushing fibre deeper into the existing networks.  Australia’s NBN Co. are taking on this challenge right now, and have committed to rolling out a mixture of FTTN, FTTP, FTTB/dp, fixed wireless and satellite, and upgrading a HFC network.
Fibre connection comparison

But how are they deciding how far to push the fibre? Whether the network is rolled out from scratch or is an incremental rollout, the problem that needs to be solved is very complex. There are many factors to consider, the most significant of which are:

strategic fibre roll out

Other factors which should be considered, but add even more complexity, are:

Deployment prioritisation

How do we prioritise the deployment to maximise revenue, subject to local constraints such as construction staff availability? If other factors such as socio-economic demographics affect the prioritisation strategy, how can they be taken into account?
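One simple way to frame the question is revenue per unit of construction effort. The sketch below is purely a greedy illustration with invented figures, not a real prioritisation model (which would handle demographics, deadlines, and crew scheduling):

```python
# Hedged sketch: rank candidate areas by expected revenue per crew-week,
# then deploy greedily within a crew-capacity constraint. All figures
# are invented for illustration.
areas = [
    # (name, expected_annual_revenue, crew_weeks_required)
    ("downtown", 900_000, 30),
    ("suburb-east", 400_000, 20),
    ("suburb-west", 350_000, 25),
    ("rural-ring", 200_000, 40),
]
crew_weeks_available = 60

# Best revenue-per-effort first.
ranked = sorted(areas, key=lambda a: a[1] / a[2], reverse=True)

plan, used = [], 0
for name, revenue, weeks in ranked:
    if used + weeks <= crew_weeks_available:
        plan.append(name)
        used += weeks

print(plan)  # areas selected within the crew budget
```

A real planner would treat this as an optimisation problem (the greedy pass is only a first approximation), but even the sketch makes the constraint trade-off concrete.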

Upgrade path

How important are upgrade path considerations? Is it important to have an upgrade path that is as cost-effective as possible? If short term profitability is crucial, an upgrade path plan may not be as important.

Technology continuity

How important is technology continuity across regions for consistency of rollout and ease of operations?

Rolling out different technologies in neighbouring areas could result in a complicated rollout and difficult maintenance.

The outliers matter – why?


When determining a technology for each premises, simple models may use pre-defined boundaries to define rollout regions, and then calculate the average cost and profit per region. This is usually done to simplify the problem, because the human brain cannot deal with the complexity of considering every premises individually. But what if this simplification leads to seriously inaccurate calculations further downstream?

Fibre Continuity 2
A simple example of premises within pre-defined regions

Consider pre-defined boundaries such as suburbs, or some defined by existing infrastructure such as aerial or duct networks. These could have a huge variation in density, and therefore any calculation to determine the average cost of a rollout, or the average speed of the connection, could be very inaccurate.

Fibre Continuity 1
A simplified diagram of a suburb. Notice the outlier on the left: the distance to this will distort any ‘average distance’ calculation for this region, and any others that rely on that value, such as ‘average connection speed’ or ‘average trenching/hauling cost’. The average values will be inaccurate for most premises within this boundary.
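The distortion is easy to put in numbers. In the tiny example below (distances invented, in metres), one outlying premises drags the mean far away from what is typical for the region:

```python
# One distant premises skews the 'average distance' badly.
# Distances are invented (metres); the last entry is the outlier.
from statistics import mean, median

distances = [120, 150, 140, 160, 130, 145, 155, 2400]

print(f"mean:   {mean(distances):.0f} m")    # 425 m
print(f"median: {median(distances):.0f} m")  # 148 m
```

Any downstream figure built on that 425 m mean, such as average connection speed or trenching cost, will misrepresent the seven typical premises, which is why per-premises modelling matters.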


On a fresh Saturday morning a group of around 50 people gathered together down at Inspire9 for a single purpose – to talk about science!

It was called the Open Science Workshop and it looked a bit like this:



The morning started with a great motivating talk by Alex Ghitza – a senior lecturer at The University of Melbourne.


We were reminded that reproducibility of work is important; that we shouldn’t worry if not everyone uses the same tools as us; and that we should just create interesting things.

We then dove in to version control with Git and GitHub by creating repositories centered around recipes (anyone who knows me knows I love toasted cheese sandwiches).

After lunch, we heard about one of my favourite websites – SciRate – from Jaiden Mispy:


We also learned about pull requests and the brutal feedback that comes with them.


It wasn’t all business though; there was time for a bit of fun:


We finished the day in style with pizza and drinks – in fact, a whole bunch of pizza:


All of the day’s talks can be found here on GitHub, along with a summary of the day – I encourage you to take a look, as we covered a lot.

All in all, a very fun time was had; a lot of cool people showed up who were genuinely excited about open science and the tools around it. I have to extend a big thanks to our sponsors for the day – GitHub themselves, Inspire9, and Biarri Networks. Big thanks also to Richard Moss for the photos.