
3 Fiber Network Input Data Traps and How to Avoid Them


Fiber optic network projects can run into trouble from many directions. One of the most common sources is input data quality that was overlooked from the beginning. 

Input data quality is even more important if you’re looking to get the speed and scale of executing your project through a data-driven, almost completely ‘digital’ approach. If a design is based on poor data, crews can run into unanticipated problems in the field, which can lead to projects missing deadlines and going over budget.

The earlier your data problems are addressed, the faster you will get to revenue. It may not feel that way at first, but once the data is in order, you can move through the rest of the process quickly. You’ll get there faster and with a more cost-effective model, while others take shortcuts they will pay for later. 

Before we dive into the input data traps, let’s first define what ‘good’ input data means. 

At a high level, you want your data to be connected, accurate, and complete. Therefore, good input data should:

  1. Show as accurately as possible where the network subscribers are located and how much fiber to drop at each location
  2. Show as accurately as possible the candidate paths available to run cable along and the candidate points along each path at which equipment can be placed. If a path is aerial, the pole locations and how they are connected (i.e. spans) are important. If a path is underground, the pit locations and how they are connected (i.e. ducts) need to be an accurate reflection of the physical reality.
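The "connected" requirement can be sanity-checked programmatically before design work starts. Below is a minimal sketch, assuming poles and spans have been loaded into simple Python collections (the IDs and field layout are illustrative, not a standard schema): it flags any span that references a pole missing from the dataset.

```python
# A quick 'connected' sanity check. The pole IDs and the (from, to) tuple
# layout here are assumptions for illustration only.
poles = {"P-001", "P-002", "P-003"}
spans = [("P-001", "P-002"), ("P-002", "P-404")]  # second span is bad data

# A span is dangling if either endpoint is not a known pole.
dangling = [s for s in spans if s[0] not in poles or s[1] not in poles]
print(dangling)  # → [('P-002', 'P-404')]
```

The same check applies to underground data: every duct should reference two known pits.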

Now that we have a sense of what good input data means, we can start navigating input data traps. 

 

Trap #1:  Duplicate Addresses (or Demand Points)

Problem

Duplicate addresses are a common type of input data error. Let’s say you have a single data source showing all subscriber locations, types, and quantities. Perfect, you’re well on your way. But even a single, authoritative-looking source can hide duplicates. 

There are a few common places duplicates pop up and areas to keep an eye on. We’ll include examples below of: 

  • Perfect duplicates 
  • Subtle duplicates 
  • Compound duplicates

A common type of duplicate is the ‘perfect duplicate’, which can pop up regularly from postal service data. For example, there are two identical entries for 123 Biarri Road. 

Or a duplicate may be more subtle. An address may be 123 Biarri Road while another is listed as 123 Biarri Rd. Subtle, but significant. 

However, the biggest headache usually comes from compound addresses. Let’s say an entry reads Unit 4, 123-127 Biarri Road, while the street numbers it spans also appear as separate entries. Without identifying a flawed compound address, you risk over-servicing the premises with multiple fiber allocations. 

We can’t blame the postal service for these data flaws. They aren’t considering fiber projects, they’re considering letterboxes for mail to be delivered to. Luckily, you have some options for keeping duplicate traps out of your project. 

 

Solution

You could manually sort through the data cell by cell and check for errors, but this is time-consuming and, ironically, error-prone. 

The most effective option is to deploy a set of automated tools to scrub this dataset once and for all, addressing all these duplicates amongst other issues within your input dataset.
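As a minimal sketch of what such automated scrubbing looks like, the snippet below normalizes addresses before comparing them, so both perfect duplicates and subtle ones like "Road" vs. "Rd" collapse to a single entry. The suffix table is an assumption for illustration; a production scrubber would use a much fuller normalization ruleset for your locale.

```python
import re

# Common street-suffix abbreviations to expand. This short lookup is an
# assumption for illustration; real address scrubbers carry far more rules.
SUFFIXES = {"rd": "road", "st": "street", "ave": "avenue", "dr": "drive"}

def normalize(address: str) -> str:
    """Lowercase, strip punctuation, and expand suffix abbreviations."""
    tokens = re.sub(r"[^\w\s-]", "", address.lower()).split()
    return " ".join(SUFFIXES.get(t, t) for t in tokens)

def dedupe(addresses):
    """Keep the first occurrence of each normalized address."""
    seen, unique = set(), []
    for addr in addresses:
        key = normalize(addr)
        if key not in seen:
            seen.add(key)
            unique.append(addr)
    return unique

raw = ["123 Biarri Road", "123 Biarri Road", "123 Biarri Rd"]
print(dedupe(raw))  # → ['123 Biarri Road']
```

Compound addresses need more than string normalization: resolving "Unit 4, 123-127 Biarri Road" against its constituent street numbers requires parsing ranges and unit identifiers, which is exactly where purpose-built tooling earns its keep.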

The good news is that this type of functionality is a subset of the digital network engineering approach. This approach is gaining traction since it not only includes steps to resolve these types of issues, but also aids the broader end-to-end engineering, and ultimately construction of your fiber network.

 

Trap #2: Internal Inconsistencies

Problem

When data comes from multiple sources, errors may arise when trying to merge these sources. People can use and interpret data differently. 

A common example is land parcels -- typically defined by polygons. Normally, discrete parcels are tied to discrete owners. But there are different ways of representing that information depending on which jurisdiction provided the dataset.

We can’t blame the town planners and architects. They’re familiar with their local policy and to them, the data looks consistent and usable. However, for a fiber network project, the data inconsistencies can be costly. 

 

Solution

One of the most effective solutions is to define a standardized process and data schema so that all of your data works seamlessly together. This enables all stakeholders to effectively transfer data, including municipalities, telcos, engineers, finance, permitting, the public, and PPP teams. 

What data can you standardize across parties? 

 

Demand Generating Elements

  • Addresses (or Demand points)
  • Parcels
  • Buildings
  • Lateral points

Demand points can be derived from parcel data, CAD data, or building footprint polygons. 

 

Environmental Elements 

  • Street centerlines
  • Curb
  • Surface types 

Use this data as a basis for network cable paths if nothing exists already or to build on what is currently there. 

 

Physical Elements

  • Conduits (underground routes)
  • Pits or Underground Structures (underground node points)
  • Poles (aerial node points) 
  • Spans (aerial routes)  

Check if there are reusable assets as well. 

 

Preferences

  • Preferred tier hub locations
  • Preferred paths
  • Areas to avoid
  • Blockers
  • Do-not-dig

 

One of the many reasons why it’s important to have consistent data is that it allows you to effectively use tools such as Fiber Optic Network Design software. To get the most out of such software, it makes sense to deal with data issues first. 
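The elements above can be pinned down as a shared schema that every party maps their source data into before design begins. The sketch below uses Python dataclasses; the field names are assumptions for illustration, not an industry standard.

```python
from dataclasses import dataclass

# A minimal illustrative schema -- the field names are assumptions, not a
# standard. The point is that every stakeholder exchanges one agreed shape.

@dataclass
class DemandPoint:
    address: str
    lon: float
    lat: float
    fiber_count: int  # how much fiber to drop at this location

@dataclass
class Pole:
    pole_id: str
    lon: float
    lat: float
    reusable: bool  # existing asset that can carry new cable?

@dataclass
class Span:
    from_pole: str  # references Pole.pole_id
    to_pole: str
    length_m: float

# Any source dataset is converted into these records up front.
dp = DemandPoint("123 Biarri Road", 144.96, -37.81, 1)
span = Span("P-001", "P-002", 42.5)
print(dp.fiber_count, span.length_m)
```

Once every party writes to and reads from the same record shapes, merging jurisdictions' datasets becomes a mapping exercise rather than a reinterpretation exercise.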

 

Trap #3: Missing and Bad Data

Problem

As an example, imagine a mobilized construction crew discovering that an aerial strand over a railway line does not exist (even though the data says it does). Dealing with permitting issues and/or design changes can take weeks and severely impact both the build cost and schedule. 

Why might this happen? 

One reason may be the assumption that all the input data given is correct and complete. The geometric path of the conduit should match the physical world. Unfortunately, that’s not always the case. 

Consider how often that data is updated and maintained. Usually, the demand data available is not 100% up to date and reliable. The data may be a few years old, and in that time new apartment buildings may have gone up. 

 

Solution

To prevent this, ask your team: what input data do we need, and what don’t we have? 

To start, there are a few free sources that can be useful. OpenAddresses.io is a great free resource for address locations, and OpenStreetMap provides building footprints. In addition, Google Earth Pro and QGIS are solid free starting points. 

Sometimes, projects don’t have specific address points but may have another set that can be used to infer the subscriber location, such as building footprints or land parcel boundaries. 
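When only footprints or parcel boundaries are available, a common trick is to take the polygon's centroid as an inferred demand point. Here is a pure-Python sketch using the shoelace formula; a real pipeline would use a GIS library and then validate the result in the field.

```python
def polygon_centroid(ring):
    """Centroid of a simple polygon given as [(x, y), ...] vertices,
    computed with the shoelace formula. A sketch for inferring a demand
    point from a building footprint when no address point exists."""
    area2 = cx = cy = 0.0
    n = len(ring)
    for i in range(n):
        x0, y0 = ring[i]
        x1, y1 = ring[(i + 1) % n]  # wrap around to close the ring
        cross = x0 * y1 - x1 * y0
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3 * area2), cy / (3 * area2)

# Unit-square footprint -> inferred demand point at its center.
print(polygon_centroid([(0, 0), (1, 0), (1, 1), (0, 1)]))  # → (0.5, 0.5)
```

An inferred point is a stand-in, not ground truth: the centroid of an L-shaped building can even fall outside the building, which is one more reason the field survey below matters.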

However, the best way to make sure your data is right is to perform a field survey. This can either be done by you and your team or subcontracted out to a company with those survey capabilities.

The end-to-end design process should include two survey stages:

1. Field Survey 

The purpose of the field survey is to confirm the subscriber locations, differentiate between a single-dwelling unit (SDU) and a multi-dwelling unit (MDU), get the home count right per building, and define service drop-off locations. This is arguably the single most important input to get right, so the time spent upfront ensuring this data is accurate is time well spent.

2. Design Validation Survey

Field survey address data and scrubbed input geospatial data are then combined to create your high-level design (HLD). This HLD can then be surveyed. The main purpose is to assess the constructability of the design, i.e. can the physical infrastructure required to support the design be built (in the case of new assets)? Or can it be used as designed (in the case of existing assets)?  

The overall goal is for the geometry of the input GIS data used to generate a design to reflect the physical reality of where aggregation nodes and cables will be placed. Deploying a team to validate your data early ensures you’re making decisions based on accurate data. 

 

Conclusion

The time you spend scrubbing and collecting your data from the beginning will pay dividends when it comes time to build. It’s not just time to construction, it’s also time to revenue, and the fastest way to get there is to prepare from the start. Don’t take a wait-and-see approach: review the data, standardize it, and agree on it as a team. 

Ultimately, it’s more cost-effective to spend a bit more time upfront cleaning data than to stand down construction crews because the design was flawed from the start -- permits not obtained, materials not available, and so on. Luckily, now that you know where input data traps may pop up, you have the means to make sure your fiber optic network stays on track and on budget. 

Stay tuned for our next piece on questions you should always ask before deploying a network and answers to help get your project off the ground. 

 

Need help with your fiber network input data? We're happy to help! 
