Some systems are experiencing issues.
This page provides real-time information about any known outages or incidents affecting Infinite Networks services. As Canberra’s trusted business Internet Service Provider, Infinite Networks delivers Business and Personal Broadband through Vision and nbn, offers Fibre Ethernet solutions for businesses and enterprise, and provides private and managed networks along with application development services. Check here for the latest updates on our service status.
Last updated 2025-06-20 21:03:21
G.fast and VDSL2 services in the ACT
Last updated 2025-02-12 11:33:30
nbn (National Broadband Network) tc2 and tc4 services
Last updated 2025-01-29 12:42:19
Mobile Broadband Services
Last updated 2025-01-29 12:42:50
Infinite Private Networks using MPLS Private Networking and SDWAN
Last updated 2025-01-29 12:43:12
Symmetrical Fibre Ethernet Services
Last updated 2025-01-29 12:43:45
3CX cloud phone systems
Last updated 2025-01-29 12:36:53
Inbound SIP Trunks
Last updated 2025-01-29 12:45:45
Individual phone lines included with Broadband services
Last updated 2025-01-29 12:46:44
Co-located equipment in Canberra and Sydney
Last updated 2025-01-29 12:47:36
Inbound and outbound mail servers for hosted email and @infinite.net.au
Last updated 2025-03-06 18:22:18
Virtual Private Servers (VPS) hosted in Canberra and Sydney
Last updated 2025-03-06 18:22:27
cPanel based shared hosting
Last updated 2025-01-29 12:36:53
Upstream transit providers
Last updated 2025-03-06 18:21:56
Networks we peer with
Last updated 2025-03-06 18:22:08
Connectivity between our data centres
Last updated 2025-01-29 12:36:53
Managed VPN services
Last updated 2025-03-05 19:42:48
Infinite Networks Phone support (1300 790 337 / 02 6137 1337)
Last updated 2025-03-05 12:34:38
Window: 0000hrs to 0600hrs (AEST)
Impact: Outage
Expected Outage: Up to 10 minutes
Description of Work: Our upstream IP Transit provider in the ACT has a planned network migration during this time. We do not expect a significant impact, as the routes for this provider will fail over to our Sydney presence.
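For customers who want to keep an eye on their connection during a window like this, the sketch below is a minimal, illustrative reachability check and not an Infinite Networks tool; the target addresses, check interval, and latency threshold are placeholders. It simply flags packet loss or a latency jump, which is roughly what a brief failover from the ACT transit provider to our Sydney presence would look like from the outside.

```python
#!/usr/bin/env python3
"""Illustrative reachability check for a maintenance window (Linux).

Not part of Infinite Networks' tooling: the targets, interval, and
latency threshold below are placeholders. It pings each target once a
minute and reports loss or unusually high round-trip times.
"""
import re
import subprocess
import time
from typing import Optional

TARGETS = ["1.1.1.1", "8.8.8.8"]   # placeholder endpoints to watch
LATENCY_ALERT_MS = 50.0            # placeholder "unusually slow" threshold
INTERVAL_SECONDS = 60

def ping_once(host: str) -> Optional[float]:
    """Return the RTT of a single ping in milliseconds, or None on failure."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return None
    match = re.search(r"time[=<]([\d.]+)\s*ms", result.stdout)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    while True:
        for host in TARGETS:
            rtt = ping_once(host)
            if rtt is None:
                print(f"{host}: unreachable (failover may be in progress)")
            elif rtt > LATENCY_ALERT_MS:
                print(f"{host}: reachable but slow ({rtt:.1f} ms)")
            else:
                print(f"{host}: ok ({rtt:.1f} ms)")
        time.sleep(INTERVAL_SECONDS)
```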
4:01 PM: An official email from Megaport has been received acknowledging the broadcast storm today. We have been advised that "An official report outlining the root cause, restorative actions and any mitigation actions will be provided within the next 3-5 business days."
3:25 PM: Resolving the incident, as the cause has been identified and the Megaport IX has been placed back into production.
2:12 PM: This has been confirmed to be a broadcast storm of 600Gbit of traffic across the Megaport IX NSW peering links. The graph can be viewed at https://www.megaport.com/services/ix-statistics/
2:01 PM: We have identified the peering issue to be with the Megaport IX in NSW. A broadcast storm occurred across the domestic peering fabric. This link connects to Microsoft, Office 365, and Google, among other cloud services. Other ISPs observed this issue and had to take mitigation steps to stabilise routing.
1:49 PM: It appears that other ISPs and networks also observed an outage related to domestic peering. We have no additional details, but Infinite is not the only ISP affected.
1:26 PM: We have identified that one of our peering links, which connects to major cloud providers, has gone offline, and we are investigating the cause. In the interim we have turned off peering on this link and routed traffic via a secondary link (a simplified storm-detection sketch follows this incident log).
1:25 PM: Outage ended
1:05 PM: Outage began
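The updates above describe a broadcast storm saturating the Megaport IX peering in NSW and our decision to shut that peering down and reroute via a secondary link. Purely as an illustration of the detection side, the sketch below samples the Linux kernel's per-interface packet counter and alerts on a sudden inbound packet-rate spike; the interface name and threshold are assumptions, and this is not how Infinite Networks or Megaport monitor their networks.

```python
#!/usr/bin/env python3
"""Minimal packet-storm detector (illustrative only, Linux).

Samples /sys/class/net counters and alerts when the inbound packet
rate jumps past a threshold - the basic signal a broadcast storm like
the one described above produces. The interface name and threshold
are placeholders, not Infinite Networks or Megaport tooling.
"""
import time
from pathlib import Path

IFACE = "eth0"            # placeholder peering-facing interface
PPS_ALERT = 1_000_000     # placeholder packets-per-second threshold
SAMPLE_SECONDS = 5

def rx_packets(iface: str) -> int:
    """Read the cumulative inbound packet counter for an interface."""
    return int(Path(f"/sys/class/net/{iface}/statistics/rx_packets").read_text())

if __name__ == "__main__":
    previous = rx_packets(IFACE)
    while True:
        time.sleep(SAMPLE_SECONDS)
        current = rx_packets(IFACE)
        pps = (current - previous) / SAMPLE_SECONDS
        previous = current
        if pps > PPS_ALERT:
            print(f"ALERT: {IFACE} receiving {pps:,.0f} pkt/s - possible storm; "
                  f"consider shutting the peering session and using the backup path")
        else:
            print(f"{IFACE}: {pps:,.0f} pkt/s")
```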
9:45 AM: We have identified that a component of our Sydney-to-Canberra data centre link has malfunctioned. We are currently working to restore services.
9:48 AM: We will have an update in the next 30 minutes
9:50 AM: We are failing services over to our backup, and services will be restored shortly.
10:10 AM: Our primary carrier has had a major failure which has affected our interstate services link. We are currently working on the failover to restore services until the link can be repaired. We apologise for the inconvenience caused.
10:16 AM: Our failover has completed but did not bring all services up cleanly. We are working to restore services whilst also investigating the root cause and a fix.
10:34 AM: We have been in contact with the primary carrier and our suppliers to restore services. Our engineers are working urgently on restoration, and the primary carrier is already investigating the power issue. We do not currently have an ETA, but we will keep updating this status every 30 minutes.
10:42 AM: The primary wavelength service between Canberra and Sydney has failed and the secondary wavelength is experiencing packet loss. We have tickets lodged with both carriers and are awaiting updates. In the meantime we are doing what we can to stabilise the network.
11:10 AM: The secondary carrier and the primary carrier have identified the issue and are restoring services; as the repairs progress, services will begin to come back up.
11:19 AM: The network has stabilised now that the secondary carrier has resolved their packet loss issues, but the primary service is still down and we are working with the engineers to resolve the issue.
11:53 AM: Services are now being restored; your connection or hosting may drop briefly and come back up during the restoration.
12:24 PM: Services are restored but may experience instability while we work with our primary supplier to address the root cause of the issue, avoid future interruptions, and ensure stability.
12:42 PM: We had incorrectly diagnosed the issue as being with the carriers. A core switch module in Sydney that terminates the connection has failed. It reports that its status is OK, but the card has clearly failed. We have pulled it out of service to stabilise the network. We are running at reduced redundancy, and we have observed packet loss across our secondary link between Canberra and Sydney. Everything should be stable, but because we are at reduced redundancy, any further outage may cause another issue.
1:33 PM: After pulling the switch module out of service by shutting down the affected interfaces, we power cycled the failed module remotely. The module is now reporting as failed. We have the option of having the data centre provider reseat the module using remote hands, but as we have a team member available in Sydney to replace the card, we are going in this afternoon to swap the failed module out. We will confirm that it works, but we will cut it over later tonight to avoid any disruptions; if we have any further outages after the module is swapped, we may choose to put it into production earlier to stabilise the network. We will provide more updates as information comes to hand, and a detailed post-incident report is available upon request, with a summary listed on this page.
3:35 PM: Our Sydney team members are arriving at the data centre with the replacement module. The failed module is not in service, and no outages are expected during the replacement, as it can be hot swapped. We will move the SFP and patch cables over to the replacement module, ensure it powers on correctly, and check the status of the card. We have scheduled a window at 9:00 PM tonight to re-enable the affected ports and bring the network redundancy back online. If everything works correctly, there may be a short 5-minute window where the network adjusts and brings the primary link between Canberra and Sydney back online. We do not expect any major outages while resolving the issue.
4:50 PM: We are seeing issues across the network again. We are investigating the cause and may activate the replacement module earlier than 9 PM to resolve the issue. Updates will follow.
4:58 PM: We saw a brief outage on the secondary link to Sydney, but the network has recovered. If this happens again, we will bring the replacement module online right away rather than waiting until 9 PM tonight.
8:30 PM: Another small outage on the secondary link to Sydney has occurred, so we are bringing the primary link online 30 minutes early to fix this issue.
8:45 PM: The switch module has been enabled, but the interface for the primary link between Canberra and Sydney has not come back up.
9:24 PM: We have lodged a fault with the carrier and determined that there is no light being received on their end. They have dispatched a tech to the Sydney data centre to investigate the issue. We will monitor the situation. There are no known network issues at this time.
10:45 PM: The primary carrier has a technician onsite; there is light loss on the cross connect through the data centre. We have asked the data centre to investigate the cross connect and clean the fibre if necessary (a simplified light-level check is sketched after this incident log).
11:55 PM: We are waiting for the data centre to clean and test the fibre; the ETA is 30 minutes.
12:38 AM 8/6/2021: Patch cable replaced due to low light levels. The primary link interface is now up and being put into service. This should be working within the next five minutes.
12:47 AM 8/6/2021: Network redundancy is restored and everything is working as expected. We will continue to monitor this through the night to ensure there are no further issues.
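The tail end of this incident (9:24 PM through 12:38 AM) came down to low receive light on the Sydney cross connect, resolved by cleaning the path and replacing a patch cable. For illustration only, the sketch below reads optical receive power on a Linux host via ethtool -m; the interface name, the alarm threshold, and the exact label ethtool prints all vary by transceiver and driver, so treat it as a sketch under those assumptions rather than part of our tooling.

```python
#!/usr/bin/env python3
"""Illustrative optical light-level check (not Infinite Networks' tooling).

Reads the SFP's digital diagnostics via `ethtool -m` on a Linux host
and flags low receive power, the symptom seen on the Sydney cross
connect above. Interface name, threshold, and the exact label ethtool
prints are assumptions that vary by hardware.
"""
import re
import subprocess
import sys

IFACE = "eth0"          # placeholder: the interface facing the cross connect
RX_POWER_MIN_DBM = -14  # placeholder alarm threshold

def rx_power_dbm(iface: str) -> float:
    """Extract receive power in dBm from `ethtool -m` module diagnostics."""
    out = subprocess.run(["ethtool", "-m", iface],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"Receiver signal average optical power.*?(-?[\d.]+)\s*dBm", out)
    if not match:
        raise RuntimeError("No receive power reading found; DOM may be unsupported")
    return float(match.group(1))

if __name__ == "__main__":
    power = rx_power_dbm(IFACE)
    if power < RX_POWER_MIN_DBM:
        print(f"{IFACE}: Rx power {power:.2f} dBm below {RX_POWER_MIN_DBM} dBm - "
              f"check patch leads and have the cross connect cleaned")
        sys.exit(1)
    print(f"{IFACE}: Rx power {power:.2f} dBm looks healthy")
```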