Blog Posts related to Fiber Network Planning

It’s a simple question: how much spare fiber optic capacity should be placed in the network, and where?

Getting it wrong can have a massive impact on your costs: too much and you’re wasting capex; too little and your opex goes through the roof. Coming up with the right answer is not as easy as it might seem. The question is not how much fiber optic capacity we need long term, but how we get that capacity when we need it.

The Main Mistake: The wrong amount of spare fiber optic capacity in the wrong spot

Take a hypothetical simple 3 tier home run FTTP network (no splitters):

  • At first glance, it looks like we could have 12x DPs (Demand Points) per MPT (Multiport), and 12x MPTs per FDH (Fiber Distribution Hub).
  • However, the expected average take-up is 50% so that means up to 24x DPs per MPT.
  • There is also an expected growth in DPs of 20%, so connecting only 20x DPs per MPT (24 / 120%) leaves room for that growth. This places all the spare capacity at the MPT, which means every new DP needs only a drop to be built: minimal construction and an overall minimal build.
  • But the growth in DPs isn’t evenly distributed. Half of the growth is new clusters of 20 DPs, and the other half is scattered throughout the existing DPs. Now, the answer might be 22x DPs per MPT, and 11 MPTs per FDH.
  • It is expected to take a number of years to achieve that take-up. So if we take the time value of money into account, it will be better to delay some of the build until it is required. Now the answer might be 24 DPs per MPT, and 10 MPTs per FDH.
  • However, the distribution of take-up is highly variable. Some MPTs will fill up much more quickly than others, and so the answer might then be 20 DPs per MPT, and 10 MPTs per FDH.

As you can see, as more pieces of information get added to the calculation, the optimal quantity and distribution of spare capacity will change. Add to this the other factors discussed above and it’s easy to see why some carriers don’t even attempt to run a model, and thus make so many costly mistakes. Being able to model these variables and run simulations becomes a powerful tool for identifying how much spare capacity to place at each point in the network. It can also help you understand the cost implications of different sparing rules, in both the short and long term.
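To make this concrete, here is a minimal Monte Carlo sketch in Python of the kind of simulation we mean. The parameters simply mirror the hypothetical network above (12-port MPTs, 50% average take-up, 20% DP growth) and are illustrative, not real design rules; the sketch estimates how often an MPT runs out of ports when take-up varies randomly around the average.

```python
import random

PORTS_PER_MPT = 12     # ports available on each Multiport
AVG_TAKE_UP = 0.50     # expected average take-up, per the example above
GROWTH = 0.20          # expected growth in DPs
TRIALS = 10_000        # simulated MPTs per scenario

def overflow_rate(dps_per_mpt: int) -> float:
    """Fraction of simulated MPTs whose active services exceed the ports."""
    overflows = 0
    for _ in range(TRIALS):
        dps = round(dps_per_mpt * (1 + GROWTH))  # DPs after growth
        # Each DP independently takes up service with probability 0.5.
        active = sum(random.random() < AVG_TAKE_UP for _ in range(dps))
        if active > PORTS_PER_MPT:
            overflows += 1
    return overflows / TRIALS

for dps in (20, 22, 24):
    print(f"{dps} DPs/MPT -> {overflow_rate(dps):.1%} of MPTs overflow")
```

Even this toy model makes the point: the “right” DPs-per-MPT figure keeps moving as assumptions are added, and a real model would go further by pricing the cost of each overflow (extra construction) against the capex of the idle spare ports.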

The other aspect to consider is the cost difference between the minimal capacity option and the higher capacity option. For example, when choosing between a 144f cable and a 288f cable:

  • Cost of 144f cable now = $20/m
  • Cost of a 288f cable now = $21/m
  • Cost of a future second 144f cable = $10/m (Net Present Value)

If there is a 20% chance that a 144f cable will be insufficient long term, then the expected cost of installing the 144f cable first is $20 + $10 × 20% = $22/m, so the architecture should specify the 288f cable.
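The same arithmetic is easy to generalize. Below is a small Python sketch using the article’s illustrative figures (not real cable prices); it also computes the break-even probability at which the larger cable starts to pay off.

```python
# Illustrative figures from the example above; not real cable prices.
cost_144f_now = 20.0          # $/m for a 144f cable installed today
cost_288f_now = 21.0          # $/m for a 288f cable installed today
cost_second_144f_npv = 10.0   # $/m NPV of a future second 144f cable
p_insufficient = 0.20         # chance a single 144f proves insufficient

expected_144f = cost_144f_now + p_insufficient * cost_second_144f_npv
print(f"Expected cost, 144f first: ${expected_144f:.2f}/m")  # $22.00/m
print(f"Cost, 288f up front:       ${cost_288f_now:.2f}/m")  # $21.00/m

# The 288f wins whenever its premium (21 - 20 = $1/m) is less than
# p_insufficient * cost_second_144f_npv, i.e. whenever the chance of the
# 144f proving insufficient exceeds (21 - 20) / 10 = 10%.
break_even = (cost_288f_now - cost_144f_now) / cost_second_144f_npv
print(f"Break-even probability:    {break_even:.0%}")
```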

How can we overcome these challenges?

As stated above, some carriers don’t even attempt to run a model, and they end up making a lot of costly mistakes.

Through automation and optimization in FTTx network design, we can re-evaluate how fiber optic capacity is simulated across a network. With detailed designs at the core of a simulation, you can be far more confident about long-term growth and the ongoing strain on your fiber network.

Want to see how we do it? Why not schedule a workshop today?

A 2013 study on Australia’s National Broadband Network, 3 years into the rollout, found that capex savings of over 25% on the FTTP rollout could have been easily achieved, with no negative flow-on effects, simply through changes to the architecture. This national effort aims to bring robust broadband to all of Australia and has extensive urban and rural components. A further 15% of savings were identified but not pursued, as they may have caused an increase in operational expenses. Below are a number of items that, if well planned, can make a tangible difference to the success of any FTTx network rollout.

Mistake 1: Only looking at homes

Many FTTx networks begin with a goal to serve homes. But getting to those homes means passing a whole range of other locations with revenue potential. Maybe they won’t fit into the business model, but not considering the full range of opportunities puts an FTTx network at a disadvantage from the start. Some customers want high-bandwidth services, some want multiple services, and others want just a basic broadband service; a well-designed architecture will be able to cater to all of them without adding much, if any, complexity.

Mistake 2: Assuming all Multi Premises Sites are much the same

When we first think about sites with multiple premises we think of apartment buildings, but that is just one type. If you try to service a business park using an architecture made for apartments, you are likely to have a network that simply doesn’t match the requirement. Recognizing the variations that exist will help you develop a flexible architecture.

Mistake 3: Simplistic Modelling of network and demand

Draw an “average” street and you can come up with an architecture that achieves perfect utilization. Put that same architecture against a real suburb, however, and you may find it consistently fails to deliver what it is supposed to. Any proposed architecture needs to be run through a simulator to see how conforming designs respond to different densities, take-up rates, and the presence or absence of duct and aerial infrastructure.

Mistake 4: Underestimating the cost of complexity

“It’s just one more tier in the network”; “It’s just one more technology in the mix”; or “It’s just one more service offering”. They may seem innocuous, but the cumulative effect can multiply. Not all complexity is bad, but recognizing the cost is important in determining whether to proceed.

Mistake 5: Assuming staff will follow instructions

Human behavior can be unpredictable, but one thing is certain: people will sometimes fail to read, understand, or follow instructions. The result is delays, rework, and poor quality, which can occur at each stage of the process. But what does this have to do with architecture? There are a number of things that can be done when defining an architecture to minimize the frequency or impact of these failures all the way through the chain.
Before looking at specifics, it is useful to understand the causes of this behavior. It’s not that people are malicious or careless (usually).

Mistake 6: Not planning for change

In our industry, the only constant is change. Our challenge is that fiber equipment such as cables and splice joints has an expected life of 20+ years, so how do we ensure we get 20 years of valuable use from it? We can’t predict what will happen in one, five, or ten years, let alone 20. Nonetheless, ignoring the potential for change is a recipe for disaster.

How can we overcome these challenges?

Old methods of planning and designing FTTx networks can no longer support the complexities of FTTx deployments. We need to adopt new technologies, compare the real underlying costs from planning, design, and construction through to maintenance, and be as certain as we can that we’re making the right decisions.

Data. It seems every network project I’ve encountered has been stuck at this stage. As a design software vendor we are very attuned to the importance of consistent and comprehensive data (software tends to complain loudly and relentlessly if data is poor), but typically my customers are already aware of the challenge. The planning team have clutter in the address set, the design team have a duct (conduit) set in three map projections, and the construction team discover the pole file simply isn’t true.

The underlying problem is an incorrect assumption: just because the data is useful to its owner doesn’t mean it will be useful to you.

Not simply an ETL problem

The typical approach is to treat data acquisition as an ETL activity (Extract, Transform, Load): we extract the data from the source database, transform it into the format and structure we need, and then load it into our database. The recurring quip at Biarri Networks is “But it made sense when we drew it on the whiteboard.” The solution lies in ‘intelligence’ during the Transform stage, but before solving the problem we need to understand it.

Think like an Historian

Where did this data come from, and how did the owner make use of it? Critical to this question is whether they had a human-in-the-loop (HiL) process when they used the data.

Let me explain by way of example.

Duplicates

Consider an address set provided by a postal service. A major irritation is duplicates in the data set, such as in the addresses to be served. Sometimes these are perfect duplicates, such as two rows each containing an entry for 123 Main Street. Your data team have probably already configured the ETL to find and resolve these.

Other times the duplicate can be more subtle, such as 123 Main Street and 123 Main St (note the abbreviation). Or it could be a compound address such as Unit 4, 123-127 Main Street. From the postal service’s point of view, the duplicates are harmless: they know there will only be one letterbox for the mail to be delivered to, and a human will resolve the ambiguity. For the fiber project it is a different matter. If you don’t eliminate these duplicates, you risk over-servicing the premises at Unit 4, 123 Main St with multiple fiber allocations.
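As a sketch of what ‘intelligence’ in the Transform stage might look like, here is a minimal Python example that normalizes common street-type abbreviations before de-duplicating. The abbreviation table and addresses are invented for illustration; real address matching also has to handle unit numbers, compound ranges like 123-127, and fuzzy spelling.

```python
import re

# Hypothetical, tiny abbreviation table; a real one would be far larger.
ABBREVIATIONS = {"st": "street", "rd": "road", "ave": "avenue"}

def normalize(address: str) -> str:
    """Lowercase, strip trailing dots, and expand known abbreviations."""
    tokens = [t.rstrip(".") for t in re.split(r"\s+", address.strip().lower())]
    return " ".join(ABBREVIATIONS.get(t, t) for t in tokens)

raw = ["123 Main Street", "123 Main St", "123 main st.", "125 Main Street"]
unique = {normalize(a): a for a in raw}   # keyed on the normalized form
print(sorted(unique.keys()))  # ['123 main street', '125 main street']
```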

Internal inconsistencies

There was never a single source data set. Just because you were handed a single file doesn’t mean it was created that way. Mergers and acquisitions may have resulted in a mega data set that is similar in format and structure only.

A classic case is land parcels, which are typically defined by polygons. While the notion of discrete parcels with discrete owners is a cornerstone of our economy, there are numerous ways to represent the information, with different jurisdictions often applying their own policies. One may represent subdivisions as two distinct polygons overlaid on the original parcel, resulting in three overlapping parcels. Others may remove the original. And in more complex subdivisions there may be private parcels and commonly owned parcels interlocking like a jigsaw, all overlaid on the original parcel. Town planners and architects will be familiar with the local policy, so from their perspective the data is consistent and usable; but if your project spans jurisdictions, you should expect each to have generated its data in a different manner. Unless you can detect and resolve these differences, you risk process churn and confusion in your team.
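One plausible first line of defense, sketched below in Python, is simply to flag every pair of parcels whose polygons overlap with non-zero area, then route the flagged pairs to a jurisdiction-specific rule or a human reviewer. The sketch assumes the shapely geometry library is available; the parcel IDs and coordinates are invented.

```python
from itertools import combinations
from shapely.geometry import Polygon

# Invented example: a subdivision represented as two polygons overlaid
# on the original parcel, as one jurisdiction's policy might produce.
parcels = {
    "original": Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]),
    "sub_a":    Polygon([(0, 0), (5, 0), (5, 10), (0, 10)]),
    "sub_b":    Polygon([(5, 0), (10, 0), (10, 10), (5, 10)]),
}

# Shared boundaries are fine; shared *area* needs a policy decision.
for (id_a, poly_a), (id_b, poly_b) in combinations(parcels.items(), 2):
    shared = poly_a.intersection(poly_b).area
    if shared > 0:
        print(f"{id_a} overlaps {id_b}: {shared:.0f} square units shared")
```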

Missing and erroneous data

Where a third party provides infrastructure data such as poles or conduit, we frequently assume the data will be correct and complete; for instance, that the geometric path of the conduit will match the physical world.

This ignores the maxim “If it ain’t broke, don’t fix it.” When that third party built the infrastructure they needed reasonable data, but once it is in operation they have little interest in maintaining the data, other than when a fault occurs. So if their infrastructure is stable, and on the rare occasions there is a fault the service agent in the field uses initiative to fill in the gaps, the third party has no incentive to keep their data current.

Unless you establish a process and culture that is tolerant of these errors and omissions, you risk creating a system that spends more time in feedback and rectification than in progressing the build of your network.

Free data. Sometimes it is the gift that keeps on taking.