Product Version 6.0 - Software Release Notes


Versions

Version | Date       | Author   | Details
Rev1    | 2019-07-25 | Actility | First version.
Rev2    | 2020-01-07 | Actility | Addition of “Feature Interaction Matrix” for each feature; updated the Feature Activation details for LRC record/replay tools.
Rev3    | 2020-03-11 | Actility | Section 4.3.2 (RDTP-2543): added precision regarding the Logical Channel reported for RX2.

Reference Documents

Ref | Document                                                               | Author
01  | ThingPark Version 6.0 Integration Notes                                | Actility
02  | LoRaWAN Backend Interfaces v1.0                                        | LoRa Alliance
03  | ThingPark Wireless Device Manager User Guide                           | Actility
04  | ThingPark Wireless Network Manager User Guide                          | Actility
05  | ThingPark Wireless LRC-AS Tunnel Interface Developer Guide (LoRaWAN™)  | Actility
06  | TP Wireless Radio Parameters User Guide                                | Actility
07  | ThingPark Wireless KPI Dashboard User Guide                            | Actility
08  | ThingPark Wireless Alarm Description Guide                             | Actility

Scope

This document describes the release notes of the ThingPark Wireless Product Software release 6.0:

  1. Version 6.0 subcomponents versioning
  2. Version 6.0 list of newly introduced features (User Stories) and release notes for all ThingPark software components
  3. Issues resolved for Customer Project/Customer Support

ThingPark Solution Overview

The ThingPark solution consists of four main components:

  • ThingPark Wireless – Core network and OSS

  • ThingPark OS

  • ThingPark X

  • ThingPark Enterprise

The ThingPark platform is a modular solution enabling Network Operators to:

  • Deploy LPWANs based on LoRaWAN™ or LTE with ThingPark Wireless.

  • Manage, activate and monetize IoT bundles (device, connectivity, and application) with ThingPark OS.

  • Provide value-added data layer services, such as protocol drivers and storage, with ThingPark X.

ThingPark OSS acts as the central System Management Platform (SMP), enabling all other ThingPark platform modules with base capabilities such as subscriber management, centralized authentication and access rights, and workflow management.

ThingPark Enterprise is an Internet of Things (IoT) platform that manages private LoRa® Networks. The ThingPark Enterprise edition is used by companies to support their specific business.




Figure 1 - ThingPark Solution Architecture: High-Level Product Illustration
Note that the modules above may represent a physical server, a function, a service or a business support layer within the overall ThingPark solution, and not necessarily a physical HW server.


Subcomponents Versioning

ThingPark OS

Sub-component versioning:

MODULE | NAME | DESCRIPTION | VERSION
BILLING | billing | Online Billing | 4.0.0
IDP | acy-keycloak-smp-extensions | Actility Keycloak Component | 2.0.1
IDP | acy-std-keycloak | Actility Keycloak Component | 4.4.0.Final-2
IDP | smp-keycloak-utils | Actility Keycloak Component | 2.2.0
LOCALES | smp-auth-gui-locales | Localization | 2.0.2-1
LOCALES | smp-billing-locales | Localization | 4.0.0
LOCALES | smp-dashboard-kpi-gui-locales | Localization | 3.0.4-1
LOCALES | smp-locales | Localization | 12.2.2
LOCALES | smp-portal-locales | Localization | 7.12.0
LOCALES | smp-subscriber-dashboard-kpi-gui-locales | Localization | 1.0.5-1
LOCALES | smp-thingparkstore-locales | Localization | 4.4.1
LOCALES | smp-tpe-gui-locales | Localization | 7.6.0-1
LOCALES | smp-twa-admin-locales | Localization | 10.6.2
LOCALES | smp-twa-locales | Localization | 12.2.7
LOCALES | smp-wlogger-locales | Localization | 9.10.0
PORTAL | portal | User Portal | 7.12.0
SMP | smp | System Management | 12.2.2
SMP | smp-drivers | SMP drivers (bpmn workflows) | 12.2.2
SMP | smp-drivers-tools | SMP drivers (bpmn workflows) | 1.2.16
SMP-CRONS | smp-crons | Batch scripts on SMP executed by the crond | 5.0.1
STORE | store-prestashop | Online Store Prestashop Module | 4.0.3
STORE | store-thingparkstore | Online Store | 4.4.1
STORE | store-tpspayment | Online Store Payment Module | 4.4.0

ThingPark Wireless

Sub-component versioning:

MODULE | NAME | DESCRIPTION | VERSION
CATALOGS | base-station-profiles | Base Stations Catalog | 2.1.2
CATALOGS | device-profiles | Devices Catalog | 4.11.1
CATALOGS | rf-regions | RF Regions Catalog | 1.0.5
DCOS | acy-dcos-login | DCOS Login | 1.0.1
DCOS | acy-dcos-oidc-provider | DCOS OIDC Provider | 1.0.2
DCOS | kpi-base-image | Base image embedding entrypoint, submit and scheduling scripts for KPIs | 1.3
DCOS | mongos | Docker image to instantiate mongos container on DCOS cluster | 0.6
DCOS | spark-executor-image | Base image used to run Spark executors and send Graphite metrics | 1.0
DCOS | sqlproxy | Docker image to instantiate maxscale container on DCOS cluster | 0.5
HSM | hsm | HSM | 1.2.1.2
KK_BROKER | acy-kafka | Actility Kafka server package | 1.1.2
KK_BROKER | operator-records-routing | Kafka processor | 2.4.0
KK_BROKER | twa-kpi-kafka-fix-syntax | Rewrite untyped JSON to typed JSON messages | DEPRECATED
LRC | lrc | LoRa Network Server (including libraries) | 1.16.15
LRC | lrrfwfetch | LRR firmware upload on the LRC server | 1.0.5
LRC-PROVISIONING | lrc-binding-http | LRC HTTP provisioning interface | 2.0.4
LRC-SCHEMA | lrc-schema | XSD schema for RF Region validation | 1.0.4
LRR | lrr-wirmaar | Base Station - Kerlink wirmaar | 2.4.73
LRR | lrr-wirmav2 | Base Station - Kerlink wirmav2 | 2.4.73
NETWORK-SURVEY | nssa-network-survey | Network Survey | see WIKI
RCA | rca-provisioning | RCA provisioning tool | 1.3.3
RFTOOL | rfregtool | Tool to generate the packing of the RF Region file on the LRC to prepare LRR provisioning | 1.1.8
SHELLINABOX | shellinabox-proxy | Proxy and security for shellinabox | 3.0.1
SLRC | key-installer | Actility key-installer | 2.0.1
SOLVER | locsolver | Location solver | 1.2.22
SPARK | kpi-sample | Custom KPI samples | 2.2.3
SPARK | twa-kpi-spark-operator | Docker image including all operator KPI applications | 3.0.3-2.0.4
SPARK | twa-kpi-spark-subscribers | Docker image including all subscriber KPI applications | 3.2.5-2.0.4
SPECTRUM-ANALYSIS | nssa-spectrum-analysis | Spectrum Analysis | see WIKI
TPO_CONF | tpo-conf-ansible | Server configuration automation | 6.0-1.0.0-SNAPSHOT-28-06-19
TWA | task-notif-ws | TW asynchronous tasks notification server | 2.0.1
TWA | twa | Wireless OSS (Network Manager / Device Manager) | 12.2.7
TWA | twa-dev | Consumption of OSS.DEV Kafka topic for MongoDB integration | 4.0.7
TWA | twa-dev-task-fc | TWA-DEV asynchronous sub-task flow control | 2.0.2
TWA | twa-ran | Consumption of OSS.LRR Kafka topic for MongoDB integration | 3.0.8
TWA | twa-task-res | TWA-DEV task results producer | 2.0.2
TWA-ADMIN | wirelessAdmin | Wireless OSS (Operator Manager / Connectivity Plan Manager) | 10.6.2
TWA-CRONS | twa-crons | Batch scripts on TWA executed by the crond | 6.2.2
TWA-DASHBOARD | twa-dashboard-ws | Dashboard KPI server | 3.0.6
TWA-DASHBOARD | twa-dashboard-ws-script | Dashboard KPI server tools | 3.0.5
TWA-SUBSCRIBER-DASHBOARD | twa-subscriber-dashboard-ws | Subscriber Dashboard KPI server | 1.2.2
TWA-SUPPLY-CHAIN | twasupplier | Supply Chain (application sample) | 8.0.8
TWA-SUPPLY-CHAIN-CRONS | twasupplier-crons | Batch scripts on Supply Chain executed by the crond | 1.0.1
TWA_KPI | twa-kpi-kafka | Consumption of KPIs in Kafka and import to MongoDB | 3.2.3
WLOGGER | wlogger | Wireless Logger | 9.10.0

ThingPark System

Sub-component versioning:

MODULE | NAME | DESCRIPTION | VERSION
ALL | acy-java-11-adopt-openjdk-jre | Actility Java JRE 11 | 11.0.2_9
ALL | acyec | Actility PHP module to decrypt PHP files | 1.0.0
ALL | maxscale | Maxscale | 1.4.3-1
ALL | maxscale-tools | Maxscale tools (logrotate) | 1.0.1-1
ALL | tpk-extra-tools | Tool to manage configuration files | 1.4.4
ALL | wildfly | Wildfly version 15.0.1 | 15.0.1.Final-2.el6
AS-PROXY | nginx-as-proxy-conf | NGINX configuration package for AS-Proxy | DEPRECATED
HTTP-PROXY | nginx-http-proxy-conf | NGINX configuration package for HTTP-proxy | 10.4.2
MONGO | mongo-tools | MongoDB tools Actility (logrotate) | 1.6.2
MONGO | twa-kpi-mongo | TWA KPI scripts for Mongo database | 3.2.5
MONGO | twa-mongo | TWA scripts for Mongo database | 12.2.7
SQL | acy-keycloak-sql | Actility Keycloak SQL Component | 4.4.0.Final-2
SQL | acy-liquibase | Actility SQL upgrade engine based on LiquiBase | 1.0.2
SQL | mysql-tools | MySQL tools (saveDatabase and restoreDatabase scripts, logrotate) | 1.2.3
SQL | smp-sql | SMP SQL scripts for SQL database | 12.2.2
SQL | twa-sql | TWA SQL scripts for SQL database | 12.2.7
SQL | twasupplier-sql | TWA Supplier SQL scripts for SQL database | 8.0.8
TOOLS | tool-module-initialization | Initialize modules such as Network Manager and Application Manager | 1.4.0
WEBAPP | smp-auth-gui | SMP Authentication GUI | 2.0.2-1
WEBAPP | twa-dashboard-kpi-gui | Dashboard KPI GUI | 3.0.4-1
WEBAPP | twa-subscriber-dashboard-kpi-gui | Subscriber Dashboard KPI GUI | 1.0.5-1



Version 6.0 Newly Introduced Features (User Stories)

Maximization of LoRaWAN Coverage Footprint features (roaming)

RDTP-4577: LRC-TEX integration via enhanced wildcarding functionality

Feature Description

This feature provides significant enhancements to the backend configuration model to simplify the integration between ThingPark Wireless and ThingPark Exchange (TEX).

ThingPark Exchange (TEX) provides a star topology to inter-connect Network Servers and Join Servers in order to support LoRaWAN roaming and device activation flows.


The following diagram illustrates the message flows using direct connection mode between the different components of the LoRaWAN backend: fNS, sNS and JS. It also shows how the different roaming tables (RoamingOperator, TrustedOperator
and ForeignOperator) are used to route outgoing messages and authorize incoming messages between these components.




The following diagram illustrates the backend message flows using star topology , by inserting ThingPark Exchange between the different backend components to facilitate integration:




Prior to release 6.0: To integrate Actility Network Servers (LRC) with ThingPark Exchange, the TEX routing profile is configured instead of the routing profile of the roaming partner.
While this configuration allows full interoperability between LRC and TEX, its drawback is the configuration complexity: there is no way to define a default route for outgoing traffic towards TEX.
Note that wildcarding the transmission of HomeNsReq messages from fNS to JS has already been possible since TPW release 5.1 (RDTP-4576).


Starting release 6.0: LRC outgoing/incoming traffic can be routed to peer network servers/join servers via two options:

  • Direct connection: In this case, the routing profiles configured in the roaming tables (RoamingOperator and TrustedOperator tables) point to the peer server without going through TEX.

  • Via TEX: In this case, a default routing profile is configured in the RoamingOperator and TrustedOperator tables to point to TEX. This default routing profile wildcards these roaming tables, sparing the TPW administrator the need to repeat this configuration for each pair of backend nodes connected through TEX.

NOTE: The implementation of this feature allows mixing direct and TEX modes for the same LRC since the wildcarding function is triggered only when a direct routing option is not configured for a given pair of nodes (identified by their NetID or JoinEUI (also known as AppEUI)).
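The route-selection principle described above (a direct entry takes precedence; otherwise the wildcarded default route towards TEX applies) can be sketched as follows. The table contents, URLs and function names are illustrative assumptions, not the actual LRC configuration schema:

```python
# Hypothetical sketch of the release 6.0 route selection: a direct entry in the
# RoamingOperator table wins; otherwise the wildcarded default routing profile
# (pointing to TEX) is used. Names and URLs are purely illustrative.

DEFAULT_TEX_ROUTE = "https://tex.example.com"  # wildcard routing profile

# NetID -> routing profile for direct (point-to-point) connections
roaming_operator = {
    "000001": "https://ns.partner-a.example.com",
}

def resolve_route(net_id: str) -> str:
    """Return the direct route configured for this NetID, or fall back to the
    default route towards TEX when no direct entry exists (wildcarding)."""
    return roaming_operator.get(net_id, DEFAULT_TEX_ROUTE)
```

With such a lookup, direct and TEX modes can coexist on the same LRC, since the wildcard only applies when no direct entry matches.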

In addition to wildcarding, this feature brings other improvements:

  • When the LRC receives a Join Request having a DevEUI known to the LRC (i.e. the device is already provisioned in the LRC), it behaves as an sNS instead of determining fNS or sNS behavior based on AppEUI of the JoinReq (mechanism supported in previous releases, still kept in release 6.0 for backward compatibility). This enhancement is useful for devices personalized with arbitrary AppEUI/JoinEUI (not really pointing to the JS serving the device activation).

  • Likewise, when the LRC receives an uplink frame having a known DevAddr (i.e. the device is already provisioned in the LRC), it behaves as an sNS regardless of its NwkID. This enhancement is also useful for ABP devices personalized with a NwkID that does not match the Operator’s NetID.
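The two enhancements above can be summarized as a small decision rule; the sketch below illustrates the documented behavior and is not LRC source code (identifiers and return labels are assumptions):

```python
# Illustrative decision logic: a known DevEUI (Join Request) or a known DevAddr
# (uplink frame) makes the LRC behave as sNS, regardless of AppEUI/NwkID.

provisioned_dev_euis = {"0018B20000001234"}   # devices provisioned in the LRC
provisioned_dev_addrs = {"04A1B2C3"}

def join_request_role(dev_eui: str) -> str:
    # Known DevEUI -> sNS; otherwise fall back to the legacy AppEUI-based
    # decision (kept in release 6.0 for backward compatibility).
    return "sNS" if dev_eui in provisioned_dev_euis else "legacy-AppEUI-decision"

def uplink_role(dev_addr: str) -> str:
    # Known DevAddr -> sNS regardless of its NwkID (useful for ABP devices
    # personalized with a NwkID not matching the Operator's NetID).
    return "sNS" if dev_addr in provisioned_dev_addrs else "legacy-NwkID-decision"
```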

Key Customer Benefits

Thanks to this feature, the LRC integration with TEX becomes easier to configure, allowing TPW Operators to fully leverage TEX benefits. ThingPark Exchange (TEX) offers the following advantages:


  • Easy and scalable integration between LoRaWAN backend nodes (Network Servers and Join Servers): thanks to its star topology, ThingPark Exchange provides a default routing profile for outgoing traffic of each backend node to avoid
    defining point-to-point routing profiles between the different backend nodes of the LoRaWAN ecosystem.


  • Centralized and extensible policy control.

  • Security shield against NS/JS peering nodes.

  • Billing and monitoring capabilities: UDRs, traffic statistics, dashboards, etc.

Besides the integration simplification with TEX, this feature allows mitigating potential issues related to ABP devices personalized with a NwkID not matching the Operator’s NetID: without this feature, such devices would stop working when the Operator sets up a roaming agreement with the NetID personalized on these devices.

Feature Activation

This feature is deactivated by default. To activate a default route pointing to TEX, the ForeignOperator, RoamingOperator and TrustedOperator tables need to be wildcarded.

  • To wildcard the <ForeignOperator> table: only the ReceiverID can be wildcarded.

  • To wildcard the <RoamingOperator> table: only the ReceiverID can be wildcarded.

  • To wildcard the <TrustedOperator> table: only the SenderID can be wildcarded.

NOTE: HTTP basic authentication is mandatory when the TrustedOperator table is wildcarded.

Feature Limitations

The implementation of this feature is compliant with backend interfaces v1.0. NSID (introduced in v1.1 of the backend interfaces specification) shall be supported in TPW release 7.0.

Feature Interaction Matrix


LoRaWAN backend interfaces | Radio Interface | TWA User Interface | OSS API | NS-AS Interface | Billing / UDR | Alarms

RDTP-5475: Class B operation for indoor base stations (PoC)

Feature Description

To operate Class B devices, base stations must implement a stringent time synchronization mechanism to be able to transmit beacon signals and downlink pingslots to class B devices. For outdoor base stations, this time synchronization is ensured by a GPS receiver embedded in the base station. However, in most cases, GPS signals cannot be received by indoor base stations.

The scope of this feature is to provide the adequate timing synchronization mechanisms for indoor base stations (i.e. pico or nano gateways) to enable class B operation for these BSs.

Please note that the implementation of this feature in release 6.0 has some limitations (detailed in the “Feature Limitations” section hereafter) so it is not yet fully productized. Hence, this implementation is mainly targeting Proof-of-Concept (PoC) demonstrations.

The following timing requirements are necessary to allow class B operation for indoor gateways:

  • The precision of the beacon timing should be better than 1 ms:

    • To transmit beacon signals simultaneously from multiple base stations, all those BSs must be GPS-synchronized so that their transmissions are aligned with 1 microsecond accuracy.

    • In the absence of GPS synchronization, beacons sent at the same time by adjacent base stations collide over the air.


Solution: Use beacon randomization for indoor base stations, i.e. each BS randomizes its beacon transmission time to minimize the probability of colliding with the beacons of surrounding BSs. Beacon randomization has been supported by ThingPark since release 4.3; it is activated at RF Region level via the parameter “BeaconRandomization”. For more details, please refer to [6].


  • The precision of the pingslot timing should be better than 1 ms: while class B devices can tolerate a pingslot timing drift up to 10 ms, the maximum tolerable base station timing drift is set to 1 ms to keep enough margin to compensate the device’s clock drift.

This PoC implementation focuses on the indoor deployment use cases where the indoor base station is located within the coverage area of an outdoor macro base station that is GPS-synchronized, as illustrated by the following figure:



Since the indoor pico/nano BS (no GPS signal) is located within the coverage area of an outdoor macro BS (GPS-synchronized), it relies on this coverage overlapping (with macro diversity effect) to achieve the target time synchronization as per the following sequence:

  1. The LRR of the indoor BS broadcasts periodic uplink LoRa SYNC frames over the air (every 15 minutes by default). The uplink payload of each SYNC frame includes the local timestamp of the LRR’s radio board at the time of emission (accounting for the packet airtime).

  2. SYNC frames are received and GPS-timestamped by the outdoor macro BS, then forwarded to the LRC over the IEC-104 backhaul link.

  3. The LRC receives the SYNC frame of the indoor BS via the outdoor macro gateway and sends back to the indoor GW the mapping local timestamp -> UTC timestamp. If the pico/nano BS has not received the LRC response within 10 s, it sends another SYNC frame.

NOTE: If the SYNC frame is received by several base stations, the LRC returns multiple UTC times to the pico/nano LRR; the LRR then decides which one to use, excluding the values that are not GPS-timestamped.

NOTE: The LRR of indoor BS applies a drift compensation mechanism based on the last two mapping points received from the LRC.

The SYNC frame sent by the indoor pico/nano base station uses a proprietary MAC message type (MType = 111, authorized by LoRaWAN specification). This proprietary MType allows the macro BS to differentiate the SYNC frame structure from other LoRaWAN uplink frames.
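The drift-compensation mechanism mentioned above (based on the last two mapping points received from the LRC) can be sketched as a linear extrapolation. The exact LRR algorithm is not published; the units, names and interpolation method below are assumptions for illustration only:

```python
# Hedged sketch: convert a local radio-board timestamp to UTC using the last
# two (local, UTC) mapping points returned by the LRC. Linear extrapolation
# simply illustrates the principle of compensating a slowly drifting clock.

def local_to_utc(local_ts: float, p_old: tuple, p_new: tuple) -> float:
    """p_old, p_new: (local_timestamp, utc_timestamp) mapping points, in seconds."""
    (l1, u1), (l2, u2) = p_old, p_new
    drift_rate = (u2 - u1) / (l2 - l1)   # close to 1.0; deviation = clock drift
    return u2 + (local_ts - l2) * drift_rate
```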

Other deployment use cases shall be supported in the following ThingPark releases, namely:

  • Isolated single indoor pico/nano base station (no coverage overlapping with other indoor/outdoor base stations).

  • Indoor cluster of several pico/nano BS having overlapping coverage between them, without (or with partial) coverage from outdoor macro BS.

Key Customer Benefits

Thanks to this feature, ThingPark Operators can demonstrate the ability of indoor base stations to operate class B without GPS synchronization. This feature extends the use of class B mode to all indoor applications, for instance water/gas metering and other actuator-type applications requiring reactive downlink transmissions from application servers.
Operating class B on indoor BSs also extends the use of multicast/FUOTA (Firmware Upgrade Over The Air) to indoor base stations.

Hence, this functionality leverages heterogeneous network deployments (with a mix of macro outdoor + pico/nano indoor BS) to maximize LoRaWAN coverage footprint while offering the same level of services (class B and multicast) for all base stations.

Feature Activation

This feature is deactivated by default. To activate it, the following configuration changes are required on macro and pico/nano base stations:

  • On the outdoor macro base station, the following flag should be enabled in lrr.ini; it instructs the outdoor macro BS to forward the pico/nano SYNC frames:


[utcsynchro]
forward=1 ; 0 is the default value


  • On the indoor pico/nano base station, the periodicity (in seconds) of SYNC frame transmissions should be defined in lrr.ini. A value of 300 corresponds to one SYNC frame every 5 minutes; the value should not exceed 900 s (15 minutes). Other parameters related to the logical channel and spreading factor used to transmit the SYNC frame are also configurable in lrr.ini:

[utcsynchro]
period=300 ; 0 is the default value
lc=1 ; logical channel to use for SYNC frame
spfact=9 ; spreading factor to use for SYNC frame

Feature Limitations

The Proof-of-Concept implementation of this feature in release 6.0 has the following limitations:

  • The indoor base station must be within the coverage area of an outdoor macro base station that is GPS-synchronized. If this condition is not fulfilled, the indoor base station cannot support class B operation.

  • SYNC-frames sent by the LRR are not authenticated by a Message Integrity Code (MIC). This limitation shall be lifted in the productized implementation in future releases.

  • The configuration of the synchronization mode is done manually; it will be automatically controlled by ThingPark in the productized implementation.

  • The RF frequency and data rate used by SYNC frames are configurable directly in lrr.ini in the current release; this configuration will be supported at RF Region level in the productized version.

  • The indoor BS synchronization mode (synchronized-by-macro) is not reported to Network Manager in the current release, hence the indoor BS is seen as “GPS-synchronized” as long as it remains synchronized via an outdoor macro BS.

  • Only unicast class B is supported in the current release, multicast class B is not yet implemented (will be supported in the productized implementation).

Feature Interaction Matrix

LoRaWAN backend interfaces | Radio Interface | TWA User Interface | OSS API | NS-AS Interface | Billing / UDR | Alarms

Scalability/Performance Improvement Features

RDTP-6447: LRCv2 Early availability (PoC)

Feature Description

The scope of this feature is to support a proof-of-concept (PoC) for LRCv2 in release 6.0.

LRCv2 is the long-term evolution of the current LRC sub-system, providing a horizontally scalable and fault-tolerant subsystem, that aims to replace LRCv1 in ThingPark Wireless deployments.

Two types of functions are introduced in the LRCv2 and are spread out on distinct virtual machines:

  • A RAN function dedicated to the management of the LRR state machines (LRR TCP/IEC-104 connections, periodic LRR reports, etc.).

  • A DEV function dedicated to the management of the device state machines (uplink/downlink LoRaWAN processing, advanced radio algorithms such as ADRv3 and best-LRR selection), as well as the tunnel interface with Application Servers.

The following diagram illustrates the main LRCv2 architecture:



  • Each RAN-GROUP or DEV-GROUP consists of two instances, spread over two physical site locations to support a geo-redundant High-Availability architecture.

  • LRCv2 is based on an active-active architecture: on average, each LRC-RAN instance within a RAN-GROUP acts as primary for statistically 50% of the LRRs connected to this RAN-GROUP and as secondary for the other 50%. Likewise, each LRC-DEV instance within a DEV-GROUP acts as primary for roughly 50% of the devices associated with this group and as secondary for the other 50%.

  • The RAN-GROUP serving each LRR is determined by a hash function on the LRR-ID. This function is designed in a way that equally balances the RAN load over the different RAN-GROUPs deployed in the platform. A “BOOT SERVER” component is introduced by LRCv2 architecture to inform each LRR of its RAN-GROUP configuration.

  • The DEV-GROUP serving each device is determined by a hash function on the DevAddr. Like the RAN-GROUP hash, the DevAddr hash function aims at equally balancing the DEV load over the different DEV-GROUPs deployed in the platform. For uplink frames, the RAN hashes the DevAddr included in the uplink frame to determine which DEV-GROUP the frame should be routed to (note: a special behavior is implemented for Join Request frames until a DevAddr is allocated by the LRC). For downlink frames coming from the AS, the DevAddr hash function is implemented by the PROXY-HTTP Server.

  • The communication between RAN and DEV groups uses a Kafka message bus to bring additional scalability and resilience gains to the overall architecture.

  • RAN/DEV functions on LRC are fully integrated with their peers in TWA architecture:

    • LRR Reports produced by LRC-RAN are consumed by TWA-RAN (already supported in the ThingPark architecture since release 5.2.2, also known as TWAv2).

    • Device Reports (uplink/downlink frames, location reports, downlink sent indication, multicast summary reports) are produced by LRC-DEV and consumed by TWA-DEV (also introduced in TWAv2 architecture).

  • All incoming flows to LRC remain accessible via a PROXY-HTTP Server, like LRCv1 architecture. However, the internal interface between PROXY-HTTP and LRC-DEV is based on Kafka brokers.
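The load-distribution principle above (a stable hash of the LRR-ID selects the RAN-GROUP, and a stable hash of the DevAddr selects the DEV-GROUP) can be illustrated as follows. The actual LRCv2 hash function is not described in these notes, so a generic CRC32-based hash is used purely as an assumption:

```python
# Illustrative sketch of LRCv2 group selection: a stable hash of the identifier
# (LRR-ID or DevAddr) picks the serving group, spreading load evenly across
# the RAN-GROUPs / DEV-GROUPs deployed in the platform.
import zlib

def serving_group(identifier: str, group_count: int) -> int:
    """Map an LRR-ID or DevAddr to one of `group_count` RAN/DEV groups."""
    return zlib.crc32(identifier.encode("ascii")) % group_count

# The same identifier always maps to the same group (stable routing), and a
# large population of identifiers spreads over all the groups.
counts = [0] * 4
for i in range(1000):
    counts[serving_group(f"04A1B2{i:04X}", 4)] += 1
```

Because the mapping depends only on the identifier, every uplink of a given DevAddr lands on the same DEV-GROUP, which is what makes a single DEV instance responsible for all frames of that device.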

The scope of this feature in the current release is to validate the LRCv2 architecture in a lab environment:

  • LRR bootstrapping and connection to RAN-GROUP.

  • OTA device attachment: Join procedure, DevAddr assignment…

  • Uplink/downlink frame processing (OTA and ABP).

  • Passive Roaming: fNS and sNS behavior.

  • Addition of new RAN/DEV functions to simulate capacity extension.

Key Customer Benefits

LRCv2 brings substantial scalability enhancements over LRCv1; these gains are illustrated by the following table:


Metric | LRCv1 | LRCv2 | Gain
Max # LRR connections (*) | 50 000 LRR | 500 000 (Macro) / 10 Million (Pico) | x 200 (assuming 200 RAN-GROUPs per LRC cluster)
Max LoRaWAN traffic load | 1500 frames/sec | 150 000 frames/sec | x 100 (assuming 100 DEV-GROUPs per LRC cluster)
Max # devices | 12 Million | 1.2 Billion (average ~12 packets/device/day) | x 100 (assuming 100 DEV-GROUPs per LRC cluster)

(*) NOTE: The max # LRR connections with LRCv2 considers the following assumptions:

  • For macro gateways, LRR reporting periodicity = 300s (5 minutes).

  • For pico/nano gateways, LRR reporting periodicity = 3600s (1 hour).

The split of the LRC functions into RAN and DEV allows flexible (and independent) horizontal extension of these instances: RAN-GROUPS scale with growing number of LRR connections (driven by radio coverage extension and network densification requirements), whereas DEV-GROUPS scale independently from RAN-GROUPS according to the LoRaWAN traffic growth and the number of devices supported by the platform.

On top of its substantial scalability advantages, LRCv2 also has a functional advantage over LRCv1: thanks to DevAddr hashing, an uplink/downlink frame of a given device cannot be processed by several LRC instances at the same time; exactly one DEV instance is competent to process all the frames of a given DevAddr.

NOTE: the LRCv1 limitation described above shall also be lifted in TPW release 7.0.

Feature Activation

This feature is deactivated by default, i.e. LRCv1 architecture remains activated after the upgrade to release 6.0. Please contact Actility Support team if you want to run a PoC for LRCv2 architecture.

Feature Limitations

The LRCv2 implementation in the current release is not yet fully productized; it is only intended for PoC demonstrations in a lab environment. The following limitations apply:

  • The LRR-LRC security mode in the current release is only TLS-based; IPSec mode shall be supported in future releases.

  • Device/base station provisioning from TWA to LRCv2 is not supported in release 6.0 (TWA provisioning towards LRCv2 should use Kafka instead of HTTP; this will be addressed in future ThingPark releases). Hence, the base stations and devices used during the PoC phase shall be manually provisioned in LRCv2.

Feature Interaction Matrix

LoRaWAN backend interfaces | Radio Interface | TWA User Interface | OSS API | NS-AS Interface | Billing / UDR | Alarms


RDTP-5790: Modified use of blast mode in Application Server configuration

Feature Description

When a subscriber configures an Application Server through the Device Manager application, it is possible to define multiple destinations for the same AS. This option is essentially meant to allow the definition of alternative destinations in case the primary destination URL cannot be resolved / is not responding; a routing strategy defined by Device Manager as “Sequential”.

Prior to release 6.0: Two different modes were supported when configuring multiple destinations within an Application Server (AS):

  • Sequential mode: To send a frame to the AS, the LRC uses the first destination URL. If the LRC receives an HTTP 200-OK from this first destination, it does not send the frame to the other destinations associated with this AS. However, if this first destination is blacklisted or is not responding, the LRC tries the second destination in the ordered list, and so forth. In the end, the frame is never sent to more than one destination.

  • Blast mode: The LRC sends the same frame to all the destinations at the same time. However, the use of this mode could negatively impact LRC performance (CPU load) in case of slow-responding/unresponsive AS using the HTTPS tunnel interface.

Starting release 6.0: To improve LRC performance and scalability, the blast mode is modified. While it is still possible to send the uplink frame to several destinations, the LRC no longer triggers all the destinations at the same time: the delay between two successive transmissions (to destinations Dn-1 and Dn) is bounded by either the response time of Dn-1 or the CURL timeout of Dn-1 (whichever is smaller).

  • If the routing strategy is already sequential => no change in release 6.0.

  • If the routing strategy is blast => the LRC applies a short delay between transmissions towards the different destinations.
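The sequential strategy described above can be sketched as follows; the `send` callable stands in for the real HTTP delivery, and its return convention is an assumption for illustration:

```python
# Hedged sketch of sequential delivery: try destinations in their configured
# order and stop at the first 200-OK, so the frame reaches at most one AS.

def deliver_sequential(frame, destinations, send):
    """Return the destination URL that accepted the frame, or None if every
    destination failed (blacklisted / not responding)."""
    for url in destinations:
        if send(url, frame) == 200:   # first 200-OK stops the loop
            return url                # no further destinations are tried
    return None

# Example: the first destination fails, so the LRC falls back to the second.
send = lambda url, frame: 200 if url == "https://as2.example.com" else 503
chosen = deliver_sequential({}, ["https://as1.example.com", "https://as2.example.com"], send)
```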

IMPORTANT NOTE: In release 6.0, it is still possible to configure a routing profile so that several destinations simultaneously receive the same uplink frame without any delay. To do so, the subscriber needs to create a distinct Application Server (AS) for each destination URL, then add those ASs to their AS routing profile. Each individual AS shall have one or several destinations (for reliability purposes) in sequential mode.

This is Actility’s recommended approach.


Note: In future ThingPark releases, the AS blast mode shall be completely deprecated from the Device Manager. After this deprecation, the only way to simultaneously send the same frame from the LRC to multiple destinations will be to define a distinct AS for each parallel destination.
For more details on the configuration of AS and AS routing profiles, please refer to [3].


Key Customer Benefits

This feature helps improve LRC performance by protecting the LRC from slow-responding or unresponsive Application Servers using the HTTPS interface.

Note: This feature has no impact on Application Servers using Kafka to consume the LRC uplink frames.

Feature Activation

The conversion of the blast strategy into a sequential strategy for an Application Server is activated by default. It can be controlled in the LRC by a system-wide parameter in lrc.ini:

[lrnoutput].blasttosequential

  • 0: blast mode remains unchanged.

  • 1: blast mode is converted to sequential by the LRC (default setting).

Feature Limitations

Not applicable.

Feature Interaction Matrix

LoRaWAN backend interfaces | Radio Interface | TWA User Interface | OSS API | NS-AS Interface | Billing / UDR | Alarms

RDTP-7021: TWA Asynchronous write operations

Feature Description

This feature introduces a new framework to asynchronously manage device creation/deletion tasks for both unitary level and mass/bulk import actions.

A task is an asynchronous processing triggered by a webservice API request implemented by an Application. A task is associated with a set of generic attributes: ID, creation timestamp, owner ID, state… When the asynchronous processing is complex, the task is split into several sub-tasks.

Prior to release 6.0: Mass/bulk import of devices is processed synchronously by ThingPark. The Subscriber can import up to 4000 devices in one shot via .csv upload. Synchronous processing means that the user launches the request and then waits until the request is executed by the API to get the result notification. This mode implies that the task execution duration is constrained by the maximum allowed delay between an HTTP request and its HTTP response (set to 60 s in the current product).

Thus, before release 6.0, the maximum number of devices that can be imported in one operation is limited to 4000 devices, to fit the 60s delay described above.

Starting release 6.0: In addition to the synchronous mode (kept for backward compatibility and still used by the Device Manager GUI to synchronously deliver the notification to the end user), the device mass import task can be processed asynchronously via new API webservices. Asynchronous processing means that the reception of the requested task is acknowledged by ThingPark and allocated a task-ID; the task is then processed in the background, in parallel with other tasks. Hence, the task execution duration is no longer constrained by a maximum delay, as is the case in synchronous processing.

Importing devices using asynchronous mode is supported by the new webservice:

POST /subscriptions/\/devices/import?async

When the request is accepted by TWA, the API responds as per the following example:

202 Accepted

Location: /thingpark/notification/rest/1.0/subscribers/199983788/notifications/5a532e0b885ff2205364f897

The Location URL provided in the API response can be used (e.g. polled) by the Subscriber to follow up the execution status of their bulk import request.

Therefore, thanks to the asynchronous processing, the user can import up to 50 000 devices in a single operation, providing a substantial scalability improvement to the mass import functionality.
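The submit-then-poll flow described above can be sketched from the client side as follows. This is an illustrative sketch only: the HTTP layer is abstracted behind caller-supplied functions (`http_post`, `http_get`), and the function and field names are assumptions, not part of the product API.

```python
def submit_async_import(http_post, subscription_id, csv_payload):
    """POST the device CSV with the ?async query parameter and return the
    notification URL found in the Location header of the 202 response."""
    status, headers = http_post(
        f"/subscriptions/{subscription_id}/devices/import?async", csv_payload
    )
    if status != 202:  # 202 Accepted means the task was queued
        raise RuntimeError(f"unexpected HTTP status {status}")
    return headers["Location"]

def task_state(http_get, location_url):
    """Poll the notification resource once and return the reported state."""
    _status, body = http_get(location_url)
    return body.get("state")
```

The caller would typically poll `task_state` until the task reaches a terminal state, instead of blocking on a single long-lived HTTP request.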

To support this new framework, three new applications are added to TWA architecture:

  • TWA-DEV-TASK-FC: this application performs flow control to gradually inject the sub-tasks issued from the main task (typically mass import of devices) into the different Kafka topics. This flow control is crucial to efficiently parallelize the bulk import requests with other tasks to avoid blocking TWA resources on complex batch tasks.

  • TWA-TASK-RES: this application retrieves the results of the individual sub-tasks and aggregates them to produce a consolidated result for the original task.

  • TASK-NOTIF-WS: this application distributes the notifications to the different consumers.
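The split-and-aggregate pattern implemented by these applications can be illustrated with a short sketch (not the actual TWA code): a mass-import task is split into fixed-size sub-tasks that can be injected gradually (the role of TWA-DEV-TASK-FC), and the per-sub-task results are consolidated into a single report for the original task (the role of TWA-TASK-RES). The batch size and result fields are assumptions.

```python
from itertools import islice

def split_into_subtasks(devices, batch_size=1000):
    """Yield fixed-size batches of devices, each processed as one sub-task."""
    it = iter(devices)
    while batch := list(islice(it, batch_size)):
        yield batch

def aggregate_results(subtask_results):
    """Consolidate sub-task results into one report for the original task."""
    total = {"created": 0, "failed": 0}
    for result in subtask_results:
        total["created"] += result.get("created", 0)
        total["failed"] += result.get("failed", 0)
    return total
```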

Key Customer Benefits

Thanks to the asynchronous import of devices, ThingPark Wireless Operators and Subscribers benefit from higher scalability: the asynchronous import API extends the maximum number of devices per mass import request beyond the previous limit of 4000, making it possible to import up to 50 000 devices in a single mass import operation.

Additionally, the new framework introduced by this feature shall be leveraged in future releases to support bulk actions related to device/base station administrative operations.

This feature is also beneficial for ThingPark Enterprise customers, lifting a limitation present in previous releases regarding bulk import interaction with DX Dataflow.

Feature Activation

The new TWA architecture is automatically put in place with the upgrade to release 6.0.

The use of asynchronous APIs for device creation/deletion is optional: the API caller chooses between synchronous mode (existing behavior) and asynchronous mode (new behavior implemented by this feature) by adding the “async” query parameter to the request, as follows:

  • To create a new device: POST /subscriptions/\/devices?async

  • To delete one device: DELETE /subscriptions/\/devices/\?async

  • For mass/bulk import of devices: POST /subscriptions/\/devices/import?async

Feature Limitations


This feature is only supported by ThingPark Wireless APIs; it is not supported by the TPW GUI. Therefore, device creation/deletion and device mass operations triggered by the Device Manager GUI are still performed in synchronous mode.


Feature Interaction Matrix


LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









Platform performance improvement (RDTP-7035 and RDTP-10696)

Feature Description

The following improvements are introduced in release 6.0 to enhance the overall performance of ThingPark Wireless sub-systems:

  • Slow Mongo query on BaseStationAlarm alarm 112 (RDTP-7035): Base Station alarm #112 refers to the “Abnormal log activation” condition. When this alarm is triggered while RAM-disk usage is activated, the base station conditions are reevaluated only once instead of every 5 minutes, improving MongoDB performance.

NOTE: The default severity of this alarm (when RAM-disk usage is activated) is changed from Major to Minor starting release 6.0.

  • LRC reduce CPU consumption for some threads when there is no traffic (RDTP-10696): The purpose of this improvement is to minimize the CPU consumption of some LRC threads (lora, join, lrn) when there is no traffic.

Key Customer Benefits

The improvements presented above enhance the performance of TPW sub-systems and improve their scalability/response time.

Feature Activation

The improvements presented above are activated by default.

Feature Limitations

Not applicable.

Feature Interaction Matrix


LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









MAC Efficiency and Performance Optimization Features

RDTP-8570: [Multicast] - Optimization of LRC retry mechanism for unavailable LRRs

Feature Description

To transmit downlink multicast frames, the LRC uses a retry mechanism to reattempt the transmission of a downlink frame that could not be transmitted on the first attempt.

The scope of this feature is to optimize this retry mechanism when some base stations are not available during the first transmission attempt.

Consider the following example:

  • LRR1: up and running, GPS-OK, Duty Cycle OK

  • LRR2: up and running, GPS-OK, Duty Cycle OK

  • LRR3: backhaul connection lost to LRC

  • LRR4: up and running, GPS-NOK, Duty Cycle OK

  • LRR5: up and running, GPS-OK, Duty Cycle NOK

Prior to release 6.0: Even though LRR3, LRR5 and LRR4 (for class B multicast only) are not eligible for the initial transmission attempt of the downlink frame, they were still reattempted on each subsequent retry in case their status had changed in the meantime.


Starting release 6.0: A new optional behavior is supported alongside the existing one: LRR3, LRR5 and LRR4 (in case of a class B multicast session) are excluded from any subsequent retry if they are not available during the first transmission attempt. This behavior assumes that the status of these base stations is unlikely to change between the first transmission and the following retries (the interval between 2 consecutive retries is less than 256s), so it becomes suboptimal to increase the overall transmission time of the downlink frame on their account.


The multicast summary report sent to AS includes the delivery status of all the LRRs in the initial list, i.e. LRR1…LRR5 in the example above.

NOTE: This process is re-initialized for every fresh downlink multicast frame, i.e. a non-eligible LRR for multicast frame “n” shall still be inspected for potential eligibility when multicast frame “n+1” is sent by the LRC.

For more details about downlink multicast, please refer to [6].
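The eligibility rule above can be sketched as follows. This is a hedged illustration: the status fields and the function signature are assumptions, not the actual LRC data model, and the `retry_ineligible_gateways` flag mirrors the activation parameter described later in this section.

```python
def multicast_lrr_sets(lrrs, class_b, retry_ineligible_gateways):
    """Return (first_attempt, retries): the LRRs used for the initial
    transmission and those reattempted on subsequent retries.

    An LRR is eligible when its backhaul is up, its duty-cycle budget allows
    a transmission and, for class B only, its GPS is synchronized. With the
    flag set to 1 (new behavior) only initially eligible LRRs are retried;
    with 0 (legacy behavior) every LRR is reattempted on each retry.
    """
    def eligible(lrr):
        if not lrr["backhaul_up"] or not lrr["duty_cycle_ok"]:
            return False
        return lrr["gps_ok"] if class_b else True

    first = [name for name, lrr in lrrs.items() if eligible(lrr)]
    retries = first if retry_ineligible_gateways else list(lrrs)
    return first, retries
```

With the LRR1...LRR5 example above and a class B session, the first attempt targets LRR1 and LRR2 only; the legacy behavior then reattempts all five LRRs, while the new behavior retries only LRR1 and LRR2.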

Key Customer Benefits

Thanks to this feature, the multicast transmission is enhanced to address an extended variety of use cases:

  • If the multicast session targets battery-powered devices and the main target is to optimize the session duration, the new behavior can be used to avoid reattempting the transmission of the same downlink frame on base stations that are not available during the initial transmission attempt.

  • Otherwise, if the multicast session targets class C devices that are mains-powered (no power budget constraint) and the main target is to maximize the downlink frame reception rate of the end-devices regardless of the session duration, the old behavior can be used.

Feature Activation

The activation of the new behavior is controlled by the Application Server (for instance, the RMC-Server for Actility’s FUOTA service) by setting a new Boolean flag “RetryIneligibleGateways” in the query parameters of the downlink frame POST:

  • RetryIneligibleGateways = 0: the new approach is deactivated. This is the default value when the parameter is absent, for the sake of backward compatibility with previous releases.

  • RetryIneligibleGateways = 1: the new approach is activated; the base stations that are non-eligible at the first transmission attempt are excluded from the LRC retransmission mechanism.

Feature Limitations

Not applicable.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms










RDTP-2543: Support multiple RX2 / Pingslot channels per RFregion

Feature Description

In the LoRaWAN specification, the RX2 channel is used by all device classes to receive downlink frames; this receive window opens 1 second after the RX1 window. Additionally, the RX2 window remains open all the time for class C devices, allowing them to receive downlink frames at any time.

For class B devices, in addition to the conventional RX1/RX2 slots defined for class A, periodic downlink slots are available in class B: they are called pingslots, allowing the device to receive downlink frames without initiating a prior uplink transmission.

Prior to release 6.0: Only one RX2 channel can be defined for all the devices associated with a given RF Region. Also, for class B devices, only one pingslot can be defined per RF Region. The RX2 channel is always mapped to LC255 when reported to Application Servers (over the tunnel interface) or over the LRC-OSS interface (towards TWA).

NOTE: The RX2 channel can be different from pingslot channel.

Starting release 6.0: It is possible to define several RX2 channels and/or several pingslots in the same RF Region. This means that the same LRR can serve devices having different RX2 and/or pingslot channels.


NOTE: Each device has only one RX2 channel, i.e. the same device cannot use several RX2 channels (forbidden by the LoRaWAN specification). Nevertheless, the same class B device may use several pingslots if pingslot frequency hopping is enabled (only valid for the US915, AU915 and China CN470 ISM bands).


Additionally, starting release 6.0, LC255 is no longer reported for RX2: the “real” Logical Channel (LC) used by the LRR is reported to the AS (over the tunnel interface) and over the LRC-OSS interface. The only exception to this rule in release 6.0 is the Join Accept, which is still reported with LC255 when it is sent over the RX2 channel. This limitation will be lifted in TPW release 7.0.

In case of a mono-RX2 configuration, the LC associated with RX2 is derived according to the following rules:


If <RX2Freq> does not match any <RxChannel> or <TxChannel>:
        If LC127 is not already used by a <TxChannel>:
                RX2 LC is LC127
        Else:
                RX2 LC is LC max + 1
Else:
        RX2 LC is the highest LC among all <RxChannel> and <TxChannel> matching <RX2Freq>.
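The rules above can be transcribed directly into code. In this illustrative sketch, channels are modelled as (LC, frequency) pairs, which is a simplification of the actual RF Region schema.

```python
def rx2_logical_channel(rx2_freq, rx_channels, tx_channels):
    """Derive the LC reported for RX2 in a mono-RX2 configuration.

    rx_channels / tx_channels: lists of (lc, frequency) pairs.
    """
    matching = [lc for lc, freq in rx_channels + tx_channels if freq == rx2_freq]
    if matching:
        # RX2 frequency matches existing channel(s): report the highest LC
        return max(matching)
    if 127 not in [lc for lc, _ in tx_channels]:
        return 127  # LC127 is free: assign it to RX2
    # Otherwise take the highest configured LC + 1
    return max(lc for lc, _ in rx_channels + tx_channels) + 1
```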


When more than one RX2/pingslot channel is configured in the RF Region, the LRC assigns the target RX2/pingslot channel for each device according to its DevAddr. For instance, if 4 RX2 channels are defined in the RF Region, the LRC computes the modulo 4 of each DevAddr to choose the target channel for that DevAddr, then sends this target channel to the device via the RxParamSetupReq MAC command.
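The DevAddr-based assignment can be sketched in a couple of lines; the channel frequencies used here are illustrative.

```python
def pick_target_channel(dev_addr, channels):
    """Select the RX2/pingslot channel for a device: DevAddr modulo the
    number of channels configured in the RF Region."""
    return channels[dev_addr % len(channels)]
```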

Please refer to the “Feature Activation” section for more details about configuring multiple RX2/Pingslot channels in the RF Region.

The downlink transmission slot is reported to Application Servers in the Downlink Frame Sent report, via the new attribute “TransmissionSlot”: 

  • TransmissionSlot = 1 -> RX1

  • TransmissionSlot = 2 -> RX2

  • TransmissionSlot = 3 -> Pingslot (class B only).

NOTE: TransmissionSlot = 0 means unknown.

The Operator Manager application (both UI and API) is also enhanced by this feature to provide a synthesis of the Logical Channels configured in the RF Region. The Operator administrator can consult the Logical Channel description of each RF Region by viewing the RF Region in question and scrolling to the end of the page:



Network Manager impact:

To provide Network Manager users with consistent user experience after the introduction of multiple RX2/pingslot channels, some updates are brought to Network Manager application:

  • As in previous releases, the radio statistics (duty cycle, Spreading Factor and RSSI/SNR) are displayed for each logical channel under the RF Cell panel in Network Manager. Starting release 6.0, when the RF Region used by the LRR is provisioned in Operator Manager, the RF center frequency (in MHz) is displayed in Network Manager when the user moves the mouse over a given LC tab, as highlighted in the example below:


  • Additionally, the user can specify the group of RF Logical Channels that they want to display from the following list:


  • Uplink/downlink channels: correspond to symmetrical Logical Channels, used for both uplink reception and downlink RX1 transmission slots.

  • Asymmetric downlink channels: correspond to downlink-only channels, used for asymmetric channel plans (when downlink RX1 transmissions use a different frequency than the uplink channel) as well as downlink RX2, downlink pingslots and beacon signals.

Wireless Logger impact:

The background color of the “Channel” column in Wlogger indicates the transmission slot (RX1/RX2/Pingslot) of each downlink frame (see example below):

  • White background -> RX1

  • Orange background -> RX2

  • Blue background -> Pingslot (for class B).

NOTE: RX2 is explicitly displayed on the “Channel” column only for Join Accept packets sent on RX2. For all other cases, the Logical Channel (LC) of the RX2 slot is displayed. In TPW 7.0, Join Accept packets will also show the real LC in the “Channel” column.



Interaction between multi-RX2/pingslot and multicast:

To configure the downlink multicast channel for class B/C devices, two options are supported by ThingPark in the current release:

  • Either the multicast frequency and data rate are directly provisioned in the Multicast Group. This option is fully compatible with multi-RX2/pingslot configuration.

  • Or the multicast frequency and/or data rate are not provisioned in the Multicast Group, so they are inherited from the RF Region configuration. This option is not compatible with multi-RX2/pingslot RF Regions since there is no unique RX2/pingslot channel. When this happens, the LRC returns a multicast transmission error to the AS with the following error message:

    • For class B: “Multiple Ping Slot channels and no configured multicast frequency” (F1).

    • For class C: “Multiple RX2 channels and no configured multicast frequency” (F2).
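This compatibility check can be sketched as follows. The error strings come from the text above; the function signature itself is an illustrative assumption, not the LRC implementation.

```python
def check_multicast_channel(group_frequency, device_class,
                            rx2_channels, pingslot_channels):
    """Return None when the multicast transmission can proceed, otherwise
    the error message reported to the AS."""
    if group_frequency is not None:
        return None  # frequency provisioned at Multicast Group level: OK
    if device_class == "B" and len(pingslot_channels) > 1:
        return "Multiple Ping Slot channels and no configured multicast frequency"
    if device_class == "C" and len(rx2_channels) > 1:
        return "Multiple RX2 channels and no configured multicast frequency"
    return None  # mono-channel RF Region: inherit frequency from the RF Region
```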

Key Customer Benefits

This feature brings a significant downlink radio capacity enhancement for class B/C use-cases: thanks to this feature, instead of using the same RF channel for all end-devices, several downlink channels can be defined for the same LRR. Using several downlink RF channels also mitigates collisions, RF pollution and potential jamming risks on a single channel.

This feature is particularly interesting for US915 and AU915 deployments (eight downlink channels are supported) as well as Asian deployments (AS923, Korea KR920, India IN865, China CN470) where all the potential downlink channels have the same transmission limits (TxPower, duty cycle or dwell time if required by the local regulation).

While this feature is fully supported for all the LoRaWAN regional profiles (ISM bands), it currently has low interest for European LoRaWAN deployments with the current ETSI regulation of EU 863-870 MHz band: the downlink sweet spot is RF channel 869.525MHz (currently used for RX2 and ping slots) thanks to its higher Tx Power (EIRP = 29.15 dBm) and duty cycle (10%) compared to other downlink channels.

Feature Activation

This feature is deactivated by default.

To activate this feature, the RF Region configuration needs to be modified to define several RX2 and/or pingslot channels as per the following examples:

  • To configure multiple RX2 channels (for class A/B/C), all RX2 channels are defined by setting <UsedForRX2> to 1 in the corresponding <TxChannel> sections. The following example configures two RX2 channels at frequencies 922.6 and 922.8 MHz:

<TxChannels>
    <TxChannel>
          <UsedForRX2>1</UsedForRX2>            <!-- RX2 Tx Channel flag -->
          <LC>127</LC>                          <!-- RX2 Logical channel index -->
          <SB>3</SB>                            <!-- RX2 Tx channel subband -->
          <DTC>10.0</DTC>                       <!-- RX2 Tx Channel duty cycle -->
          <Frequency>922.6</Frequency>          <!-- RX2 Tx Channel frequency -->
          <MaxDR>3</MaxDR>                      <!-- RX2 Tx Channel data rate -->
    </TxChannel>
    <TxChannel>
          <UsedForRX2>1</UsedForRX2>            <!-- RX2 Tx Channel flag -->
          <LC>128</LC>                          <!-- RX2 Logical channel index -->
          <SB>3</SB>                            <!-- RX2 Tx channel subband -->
          <DTC>10.0</DTC>                       <!-- RX2 Tx Channel duty cycle -->
          <Frequency>922.8</Frequency>          <!-- RX2 Tx Channel frequency -->
          <MaxDR>3</MaxDR>                      <!-- RX2 Tx Channel data rate -->
    </TxChannel>
</TxChannels>

  • To configure multiple pingslot channels (for class B), all pingslot channels are defined by setting <UsedForPingSlot> to 1 in the corresponding <TxChannel> sections. The following example configures two pingslots at frequencies 923.2 and 923.4 MHz:

<TxChannels>
    <TxChannel>
          <UsedForPingSlot>1</UsedForPingSlot>  <!-- Ping Slot Tx Channel flag -->
          <LC>129</LC>                          <!-- Ping Slot Logical channel index -->
          <SB>3</SB>                            <!-- Ping Slot Tx channel subband -->
          <DTC>10.0</DTC>                       <!-- Ping Slot Tx Channel duty cycle -->
          <Frequency>923.2</Frequency>          <!-- Ping Slot Tx Channel frequency -->
          <MaxDR>5</MaxDR>                      <!-- Ping Slot Tx Channel data rate -->
    </TxChannel>
    <TxChannel>
          <UsedForPingSlot>1</UsedForPingSlot>  <!-- Ping Slot Tx Channel flag -->
          <LC>130</LC>                          <!-- Ping Slot Logical channel index -->
          <SB>3</SB>                            <!-- Ping Slot Tx channel subband -->
          <DTC>10.0</DTC>                       <!-- Ping Slot Tx Channel duty cycle -->
          <Frequency>923.4</Frequency>          <!-- Ping Slot Tx Channel frequency -->
          <MaxDR>4</MaxDR>                      <!-- Ping Slot Tx Channel data rate -->
    </TxChannel>
</TxChannels>

NOTE: To maintain backward compatibility with RF Regions created in previous releases, the RX2 parameters of mono-RX2 configurations are still read from the attributes RX2DataRate, RX2TxPower, RX2Freq, RX2DTC, and RX2SB.

However, these attributes are not read if one (or more) Tx channel is configured with the flag <UsedForRX2> set to 1 in the RF Region. Hence, Actility does not recommend setting the <UsedForRX2> flag in RF Regions having mono-RX2 configurations.

NOTE: To support this feature, the minimum LRR version is 2.4.12.

Feature Limitations

  • Multi-pingslot configuration is not compatible with pingslot frequency hopping (for the US915, AU915 and CN470 ISM bands). Indeed, defining several pingslots becomes meaningless in the presence of pingslot frequency hopping. Hence, it is not allowed to define pingslot Frequency = 0 (i.e. frequency hopping) in a multi-pingslot RF Region.

  • When using multi-RX2/pingslot configuration, the multicast frequency and data rate MUST be configured at Multicast Group level.

  • Join Accept messages sent on RX2 are still reported with LC255 in the current release (not the real LC); this limitation will be lifted in release 7.0.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









RDTP-3466: [ADRv3]: Define distinct Initial_WindowSize parameters

Feature Description

To avoid volatile decisions, ADRv3 waits for (at least) N frames before sending an ADR decision (i.e. a LinkADRReq) to an end-device. This convergence phase allows the algorithm to receive enough uplink frames to take the right decisions and fulfill the target quality metrics: the uplink packet error rate and macro diversity overlapping for geolocated devices.

The ADRv3 convergence phase applies to the following cases:

  • When the end-device boots/reboots (for instance, a Join Request for an OTAA device or a reset detection for an ABP device).

  • Once the end-device acknowledges an ADR command (i.e. sends a LinkADRAns command in response to a LinkADRReq command from the LRC).

Prior to release 6.0: N is defined by the RF Region parameter “Initial_WindowSize”. The same parameter is used for all the convergence cases: device boot or between 2 ADR commands.

Starting release 6.0: To enhance configuration flexibility, the original “Initial_WindowSize” parameter is split into 3 parameters:

  • Initial_WindowSize_Boot (new RF Region parameter): this parameter defines how many distinct uplink frames should be received by ADRv3 before sending its first ADR command when the device boots/reboots.

  • Initial_WindowSize (existing parameter from previous releases): Once the device responds to an ADR command (i.e. sending a LinkADRAns MAC command), this parameter defines how many distinct uplink frames should be received by ADRv3 before sending a new ADR command (LinkADRReq) to boost the end-device link budget in case the quality targets are not met (bad uplink PER or insufficient macro diversity/coverage overlap).

  • Initial_WindowSize_Optim (new RF Region parameter): Once the device responds to an ADR command (i.e. sending a LinkADRAns MAC command), this parameter defines how many distinct uplink frames should be received by ADRv3 before sending a new ADR command (LinkADRReq) to optimize the end-device battery power when all the quality targets are satisfied (good uplink PER and sufficient macro diversity/coverage overlap).

Actility recommendation:

Initial_WindowSize_Optim > Initial_WindowSize > Initial_WindowSize_Boot.

This rule allows:

  • Faster convergence to higher data rates when the device boots/reboots: In most cases, end-devices choose the lowest data rate (and highest Tx Power) during initial network access. Reducing the window size during boot phase offers more reactivity of the algorithm.

  • High stability (less volatility) when optimizing the device’s battery consumption (airtime and Tx Power): To minimize the risk of ping-pong decisions while maintaining reactivity to any degradation of the uplink reception conditions, ADR decisions that are driven by boosting link budget should be triggered faster than ADR decisions driven by optimizing the device’s battery.

For more details about ADRv3, please refer to [6].
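The way the three parameters could drive the ADRv3 convergence counter can be sketched as follows. The parameter names come from the RF Region configuration; the state names ("boot", "boost", "optimize") are illustrative, not ADRv3 internals.

```python
def adr_window_size(state, rf_region):
    """Number of distinct uplinks ADRv3 waits for before its next LinkADRReq,
    falling back to Initial_WindowSize when the new parameters are absent."""
    default = rf_region["Initial_WindowSize"]
    if state == "boot":      # device just booted/rebooted
        return rf_region.get("Initial_WindowSize_Boot", default)
    if state == "optimize":  # quality targets met: battery/airtime optimization
        return rf_region.get("Initial_WindowSize_Optim", default)
    return default           # quality targets not met: link-budget boost
```

With the recommended ordering (Boot < default < Optim), rescue decisions after a reboot come fastest, while battery-optimization decisions are deliberately slower.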

Key Customer Benefits

Thanks to this feature, the Operator has better control over the reactivity vs. stability trade-off of ADRv3 decisions. It offers finer tuning granularity of ADRv3 parameters to maximize reactivity to degrading radio conditions without jeopardizing the stability of the algorithm, i.e. speeding up rescue decisions and slowing down battery/airtime optimization decisions.

Feature Activation

This feature is activated by explicitly configuring Initial_WindowSize_Boot and/or Initial_WindowSize_Optim parameters in the RF Region configuration.

The feature can be deactivated at any time by simply removing Initial_WindowSize_Boot and Initial_WindowSize_Optim from the RF Region (or by setting them equal to Initial_WindowSize). In this case, the same window size is applied to all the stages of the ADRv3 algorithm.

Feature Limitations

Not applicable.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









Operational Excellence Features

Support NetID migration procedure (RDTP-5320)

Feature Description

This feature allows ThingPark Operators to upgrade their NetID without running manual scripts.


When an Operator updates its NetID, the DevAddr of OTAA devices needs to be reallocated so that the NwkID part of the DevAddr matches the Operator’s new NetID.
Here is the DevAddr structure defined by LoRaWAN:




where M and N depend on the NetID type. For more information about the different NetID types, DevAddr structure and NwkID/NetID mapping, please refer to [2].

Accordingly, the NetID migration has an impact on the DevAddr: changing the NetID requires re-allocating the DevAddr so that NwkID matches the new NetID. This is important to support roaming since the NwkID part of the DevAddr allows the visiting network to derive the home NetID of the device.
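The DevAddr decomposition can be illustrated as follows. Since the actual widths of the type prefix and NwkID fields (M and N above) depend on the NetID type, they are passed as parameters here rather than hard-coded; see [2] for the real per-type layouts.

```python
def split_dev_addr(dev_addr, prefix_bits, nwkid_bits):
    """Split a 32-bit DevAddr into (type prefix, NwkID, NwkAddr)."""
    addr_bits = 32 - prefix_bits - nwkid_bits
    prefix = dev_addr >> (32 - prefix_bits)          # NetID-type prefix
    nwk_id = (dev_addr >> addr_bits) & ((1 << nwkid_bits) - 1)
    nwk_addr = dev_addr & ((1 << addr_bits) - 1)     # per-device address part
    return prefix, nwk_id, nwk_addr
```

A visiting network applies this decomposition to a roaming device's DevAddr to recover the NwkID, and from it the home NetID.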

However, since the DevAddr is only allocated during the Join procedure for OTAA devices, there is a transition period between updating the NetID on the ThingPark backend and updating the DevAddr of all active devices in the field. The duration of this transition period depends on when active devices trigger a new Join procedure: this cannot be controlled by the network for LoRaWAN 1.0.x devices, so it depends on the device implementation and on whether the device supports applicative commands to trigger a new Join procedure.

NOTE: Traffic segregation between Operators sharing the same LRC considers the new NetID even if some devices have not yet updated their DevAddr to the new NwkID/NetID.

To update NetID, please refer to the procedure described in the “Feature Activation” section.

Key Customer Benefits

This feature allows ThingPark Wireless Operators to safely migrate from one NetID to another which is needed in the following use cases:

  • An Operator upgrades its LoRaWAN membership status from adopter (NetID type 6) to contributor (NetID type 3) to grow its LoRaWAN business and support larger DevAddr range.

  • An Operator moves from experimental NetID 0 or 1 to their own NetID.

  • An Operator moves from Actility’s NetID to their own NetID.

Feature Activation

This feature is deactivated by default.

To activate it, the TPW administrator needs to perform the following tasks:

  1. Update the Operator’s NetID on TWA SQL database via the following TWA webservice:

           PUT /systems/updates/operator/netID?operatorID=<operatorID>&netID=<netID>

  2. A new Table O entry is created for the new NetID.


  3. Launch a full re-provisioning command to trigger TWA provisioning of the new NetID on the LRC databases. To execute this command, use the following webservice: /systems/lrcFullProvisionningRequests. This API should be called twice to allow re-provisioning of both devices and base stations.
    NOTE: During the full provisioning, the LRC provisioning queue is stopped.


  4. The previous Table O entry (previous NetID) is deleted.

Feature Limitations

  • ABP devices are out-of-scope of the NetID migration procedure since their DevAddr is personalized out-of-band (cannot be updated in-band via LoRaWAN MAC layer). Hence, if the DevAddr of these devices does not match the new operator NetID, they may not be able to roam-out to partner networks.

  • OTAA devices with manual DevAddr allocation are out-of-scope of this feature: the DevAddr is not impacted by NetID migration. Hence, if the DevAddr of these devices does not match the new operator NetID, they may not be able to roam-out to partner networks.

  • For OTAA devices with automatic DevAddr allocation, the new DevAddr (matching the new NetID) is only assigned when the device sends a new Join Request. Therefore, active devices will continue to use the old DevAddr (with the old NwkID/NetID) until they request a new LoRaWAN session via a Join Request.

  • Only one NetID is supported per Operator in the current release.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









Feature Description

For class A devices, the LRC queues up to 5 downlink frames for each device, sending them as soon as a downlink scheduling opportunity arises in class A mode (i.e. when the device sends an uplink frame, thus opening the RX1/RX2 reception windows to receive potential downlink frames).

Prior to release 6.0: The downlink frames queued by the LRC cluster are kept in RAM. This approach had two side-effects:

  • Queued frames are lost in case of LRC restart.

  • Queued frames are not synchronized between the two LRCs of the LRC cluster (i.e. LRC1/LRC2). Hence, in case of traffic switchover to LRC2, the frames queued on LRC1 cannot be processed by LRC2.

Starting release 6.0: Downlink frames queued by any LRC are stored on disk (in the device context, also known as LRC table-B) and synchronized with peer-LRC.

Key Customer Benefits

Thanks to this feature, downlink transmission is no longer jeopardized by an LRC restart/switchover, leveraging the ThingPark High Availability architecture to prevent downlink packet loss in such conditions.

Feature Activation

This feature is activated by default. It can be deactivated in lrc.ini via the FifoDnTabB parameter of the [features] section (1 to activate, 0 to deactivate).

Feature Limitations

In case of backhaul instability causing the same uplink frame to be received by 2 base stations connected to different LRCs (i.e. LRR1 connected to LRC1, LRR2 connected to LRC2), the downlink frame coming from the AS might be sent twice to the device.

This limitation will be lifted in future releases via RDTP-11708.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









RDTP-9070: Operator Dashboard support for SaaS operators sharing the same NetID

Feature Description

The ThingPark Operator Dashboard provides TPW Operators with an efficient tool to assess the Key Performance Indicators (KPI) aggregated at network level. This functionality is particularly interesting for performance monitoring and reporting.

Prior to release 6.0: The Operator Dashboard was only supported for TPW Operators having their own dedicated NetID. Hence, TPW-SaaS Operators sharing Actility’s NetID (or using experimental NetID 0 or 1) could not benefit from the Operator Dashboard.

Reason: Without this feature, the LRC deduces the Operator identity from their NetID.

Starting release 6.0: The Operator Dashboard is fully supported by SaaS Operators using a shared/experimental NetID. With this feature, the LRC does not rely on the NetID to derive the Operator identity: the Operator-ID is directly provisioned on the LRC by TWA backend.

For more details on the ThingPark Operator Dashboard, please refer to [7].

Key Customer Benefits

Thanks to this feature, Operator Dashboard becomes fully available for TPW SaaS operators that do not use a dedicated NetID (i.e. either sharing Actility’s NetID or using experimental NetID 0/1).

Feature Activation

This feature is automatically activated on TPW SaaS platforms after the migration to release 6.0: During the upgrade procedure, Actility NetOps perform full re-provisioning of devices and base stations by TWA into the LRC in order to provision the OperatorID on the LRC databases.


A TWA webservice is provided for this purpose: /systems/lrcFullProvisionningRequests. This API should be called twice to allow full re-provisioning of devices and base stations.


Feature Limitations

Since this feature requires a full re-provisioning of devices and base stations by TWA into the LRC, the Operator KPIs are aggregated only after this action. Hence, for SaaS Operators sharing the same NetID, KPI history prior to release 6.0 cannot be displayed on the Dashboard.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms








RDTP-8095: LRC Record/Replay tools

Feature Description

The scope of this functionality is to enrich the LRC non-regression test suite by providing a set of record/replay tools allowing Actility's Operations department, as well as ThingPark Wireless Operators, to record a few hours of live traffic for a limited number of devices on the LRC.

Typically, the recording of reference scenarios is performed on Production or Pre-Production platforms (TPW version n), whereas the replay of these reference scenarios is performed by Actility R&D teams on validation platforms for TPW versions n+1, n+2, etc.

The target is to verify that the device’s behavior (MAC layer, ADR commands, uplink routing towards AS/TWA, downlink processing and routing to the best LRR, etc.) does not show any regression from one TPW release to another.

To manage this recording on Production/Preproduction platforms, the following functions are packaged in a script delivered on the LRC since release 6.0:

  • BACKUP: Backup the reference configuration of the scenario, for the device being recorded (device’s initial context, LRC tables, RF Region…etc.).

  • RECORD-IN: Record the reference INPUT traffic for the device being recorded (i.e. traffic coming from LRRs via IEC-104 backhaul interface, downlink frames coming from AS).

  • RECORD-OUT: Record the reference OUTPUT message flows related to the device being recorded (i.e. downlink frames sent to LRRs for this device, messages forwarded to AS/TWA for this device).

The output of the recording is packaged into a single archive containing backup, traffic-in and traffic-out data. The following diagram illustrates the recording mechanism:



NOTE: The recording tool exports the device keys in encrypted format (encrypted by the LRC Cluster Key, the same way they are stored locally in the LRC). The LRC Cluster Key is used to encrypt the device's root AppKey and session keys (NwkSKey, AppSKey).


This key is NEVER exported outside the LRC. To retrieve the keys of the device being recorded (mandatory information to allow replaying the device traffic on Actility's platforms), another tool, “recording-keys-update”, is available to replace this cluster key offline with a new one dedicated to the record/replay tool only. This tool shall be used by Actility's Operations department: it takes as input the original TGZ recorded on the test platform plus the original LRC Cluster Key (manually retrieved by the OPS team), and generates a new archive using the dedicated record/replay key.


This approach is implemented to ensure confidentiality of customer’s keys by:

  • Never exporting the LRC Cluster Key in any archive generated by the LRC.

  • Never storing the device’s keys in clear format in any Actility database.
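The offline re-keying step performed by “recording-keys-update” can be sketched as follows. This is purely illustrative: the XOR cipher is a toy stand-in for the LRC's real encryption, and the variable names are assumptions, shown only to make the key-handling flow concrete (the Cluster Key stays on the operator side; only the re-encrypted archive is shared).

```python
# Illustrative sketch of the "recording-keys-update" re-encryption step.
# XOR is a toy stand-in for the real cipher; key names and the archive
# layout are assumptions, not the actual tool's implementation.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a repeating key (illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def reencrypt_device_key(stored: bytes, cluster_key: bytes, replay_key: bytes) -> bytes:
    """Decrypt a device key with the LRC Cluster Key, then re-encrypt it
    with the dedicated record/replay key, so the Cluster Key itself is
    never exported with the archive."""
    clear = xor_crypt(stored, cluster_key)   # offline step, on OPS side only
    return xor_crypt(clear, replay_key)      # what goes into the new archive

appskey = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
cluster_key = b"cluster-key-16by"            # never leaves the LRC/OPS side
replay_key = b"replay-key-16byt"             # dedicated to record/replay

stored = xor_crypt(appskey, cluster_key)     # as stored in the LRC database
exported = reencrypt_device_key(stored, cluster_key, replay_key)

# The replay platform, knowing only replay_key, recovers the AppSKey:
assert xor_crypt(exported, replay_key) == appskey
```

The design point is that both archives contain only ciphertext; the clear-text keys exist transiently during the offline re-keying step.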



Since it is not possible to record traffic for all devices, the Operator should focus on a few device models (fewer than 10) that are the most represented in their network.

NOTE: Any scenario recorded by TPW Operators should first be validated by Actility before it is added to the reference test library.

Recording pre-requisites:

  • The recorded device should not be activated on external JS, nor should it use HSM mode (see “Feature Limitations” section)

  • The lrc.ini parameter “JoinNonceType” should be set to 1 during recording. This requirement is important to allow replaying a reference scenario where an OTA device has issued a new Join Request (to avoid randomly allocated AppNonce/JoinNonce values in the Join Accept).

Please refer to the “Feature Limitations” section for a detailed description of the exclusions not supported in the current release.

Key Customer Benefits


Thanks to this feature, the LRC non-regression test coverage is significantly improved by taking some examples of real traffic conditions of key device models deployed in the field.
This feature contributes to the continual improvement of the LRC quality to avoid any potential regression in the following releases of the LRC sub-system.


Indeed, thanks to record/replay mechanisms, the real radio conditions of devices shall be covered in Actility’s non-regression test suite; this will also be the case for devices implementing a specific MAC behavior due to any potential grey areas in LoRaWAN specifications.

Feature Activation

This feature is deactivated by default.


To record a scenario for a given device, follow the procedure below.
To start a new recording:


  1. cd /home/actility/lrc/com
  2. ./record-session.sh -s -m <my_comments> -g <tag> -d <deveui> -a <devaddr> -t <duration>

where:

  • -s option: start the recording

  • -m option: adds a custom comment to the recorded archive (optional)

  • -g option: adds a tag to be included in the archive name (optional)

  • -d option: DevEUI to be recorded (mandatory, lowercase)

  • -a option: DevAddr corresponding to the DevEUI (mandatory, lowercase)

  • -t option: recording duration in seconds (optional)

To stop a recording:
  1. cd /home/actility/lrc/com
  2. ./record-session.sh -p -d <deveui> -a <devaddr>

A TGZ archive will be generated in /var/ftp/, and the raw files are available under /home/actility/RecordReplay.

NOTE: It is not possible to record several devices at the same time; only one device can be recorded at a time.

Feature Limitations

The implementation of this feature in release 6.0 has the following limitations:

  • Only one device is recorded at a time; it is not possible to record LoRaWAN sessions of several devices simultaneously.

  • If the recording duration exceeds 2 hours, the recorded scenario shall be replayed by Actility in accelerated mode. The use of this accelerated mode has the following limitations:

    • The uplink/downlink traffic rate policy defined in the Connectivity Plan of the device being recorded cannot use “DROP” policy.

    • MAC commands should not be blocked during the recorded scenario, i.e. all the MAC commands should be positively acknowledged by the device.

  • The following configuration modes are not supported by Record/Replay tools in the current release:

    • OTA device activation on external JS.

    • OTA device using HSM mode.

    • Roaming devices.

    • Devices using Kafka in their AS Routing Profile (only HTTP routing profiles are supported in the current release).

    • LRC-LocSolver interface and the associated Location reports are not covered by Record/Replay in release 6.0.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









LRC-AS tunnel interface enrichment (RDTP-7009, RDTP-8672, RDTP-6859, RDTP-6024, RDTP-3966, RDTP-8435 and RDTP-2210)

Feature Description


Several new features impacting the tunnel interface between ThingPark’s LRC core network and the Application Server (AS) are introduced in ThingPark release 6.0.
A brief description of those features and their benefit is presented below, please refer to [5] for more details about the LRC-AS tunnel interface in release 6.0.


  • [RDTP-7009]: Add Frequency to the Uplink Frame Report

    • Prior to release 6.0: the LoRaWAN channel used by the end-device to transmit the uplink frame is referenced by its Logical Channel (LC) ID, for instance LC1, LC2, LC3…etc. The physical RF frequency corresponding to this LC is not explicitly reported to AS.

    • Starting release 6.0: The RF center frequency of the uplink frame (expressed in MHz) is reported to AS – in the Uplink Frame Report (DevEUI_uplink) - besides the LC ID. Additionally, the RF frequency is also displayed in WLogger for each uplink frame.

  • [RDTP-8672]: Add Frequency to the Downlink Sent Report

    • This feature introduces the same enhancement as RDTP-7009 but for  downlink frames: Starting release 6.0, in addition to the LC ID, the RF center frequency of the downlink frame (expressed in MHz) is reported to AS in the Downlink Frame Sent Report (DevEUI_downlink_Sent for unicast frames and DevEUI_multicast_summary for multicast frames). The RF frequency is also displayed in WLogger for each downlink frame.
  • [RDTP-6859]: Correlate “Downlink Sent Reports” to downlink applicative frame requests requested by AS


    • Prior to release 6.0: The Downlink Frame Sent Report sent by the LRC to AS (used to report the transmission status of a downlink frame) does not include any explicit reference to the Downlink Frame request initiated by AS, except if the downlink frame counter is managed by AS (in case of end-to-end payload encryption enforced by HSM). Hence, if several downlink frames are queued by the LRC for a class A device, the AS needs to keep track of all queued frames in order and map the Downlink Sent Reports to them on a First-In First-Out (FIFO) basis.


    • Starting release 6.0: To simplify the AS implementation, the AS may include a CorrelationID in the query parameters of the Downlink Frame sent to LRC. This CorrelationID is then resent by the LRC to AS in the Downlink Frame Sent indication to report the transmission status (DevEUI_downlink_Sent for unicast frames and DevEUI_multicast_summary for multicast frames).


Note: CorrelationID is a 64-bit hexadecimal string defined by AS. To maintain backward compatibility on LRC-AS tunnel interface, the new query parameter “CorrelationID” is optional.
Note: The choice of unique CorrelationID is under AS responsibility; the LRC does not verify the unicity of CorrelationID across the downlink frames queued for a given device.
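The AS-side bookkeeping enabled by RDTP-6859 can be sketched as follows. The query-parameter name “CorrelationID” and the report name DevEUI_downlink_Sent come from this section; the frame/report shapes and function names are simplified illustrations, not the actual LRC-AS schema.

```python
# Hypothetical AS-side correlation of downlink requests with their
# Downlink Sent reports (RDTP-6859). Report shapes are illustrative.
import secrets

pending = {}  # CorrelationID -> application payload awaiting a sent report

def send_downlink(payload: bytes) -> str:
    """Queue a downlink and remember it under a fresh 64-bit CorrelationID."""
    cid = secrets.token_hex(8)   # 64-bit hex string; unicity is the AS's job
    pending[cid] = payload
    # e.g. POST .../downlink?...&CorrelationID=<cid>   (illustrative URL)
    return cid

def on_downlink_sent(report: dict) -> bytes:
    """Match a DevEUI_downlink_Sent report back to its original request,
    with no FIFO assumption on the LRC queue."""
    return pending.pop(report["CorrelationID"])

cid = send_downlink(b"\x01\x02")
assert on_downlink_sent({"CorrelationID": cid, "Status": "OK"}) == b"\x01\x02"
assert not pending
```

Compared with the pre-6.0 FIFO mapping, a lookup by CorrelationID stays correct even if the AS queues many frames and reports arrive out of the expected order.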


  • [RDTP-6024]: Report reset/rejoin notification on LRC-AS interface

    • Prior to release 6.0: When an ABP/OTA device is reset - either a frame counter reset for an ABP device (if authorized in the device profile) or a completed JOIN procedure for an OTA device - the LRC does not send an explicit reset/rejoin notification to the Application Server (AS).

    • Starting release 6.0: An explicit reset/join indication is reported to AS in a new Frame type “DevEUI_notification”, in the following events:

      • ABP device reset detection (notification type = reset): If the uplink frame counter is reset and the flag “Allow ABP automatic reset” is activated in the Device Profile associated with this device.

      • OTA join procedure (notification type = join): if the OTA device has successfully completed an authenticated JOIN procedure (i.e. no replay attack detected during this JOIN procedure).

      • An administrative reset (notification type = reset) is applied for this device via the Device Manager application, i.e. via the “Reset Security Context” button in the UI, or via the API: POST /subscriptions/{subscription}/devices/{device}/admins/reset.

The reset/join notification is also displayed in WLogger at subscriber level as “DEVICE RESET REPORT”.

  • [RDTP-3966]: AppSKey distribution from LRC to AS in HSM mode

    • Prior to release 6.0: In HSM mode, the AppSKey is only reported to AS alongside an application uplink payload. If the first uplink frames sent by the device after a successful Join procedure are MAC-only (i.e. not reported to AS), the distribution of the AppSKey to AS is delayed, which is sub-optimal for class B/C devices.

    • Starting release 6.0: In HSM mode, once the device completes a valid JOIN procedure, the AppSKey is immediately reported by the LRC to AS with the Join notification, via the “DevEUI_notification” frame. This immediate notification allows the AS to immediately schedule downlink frames for class B/C devices without waiting for the first uplink applicative frame.

Reminder: in HSM mode, the AS is responsible for encrypting the downlink frames using the AppSKey. This AppSKey is sent to AS via the LRC, but it is encrypted by the AS Transport Key (a key-encryption key known only to the HSM and the AS), so the LRC does not access the AppSKey in clear format.

  • [RDTP-8435]: Device’s battery level reporting to AS within MAC-only frame

    • Prior to release 6.0: The device feedback reported in the DevStatusAns MAC command (including the device’s battery status + downlink margin) is only reported to AS alongside an application uplink payload.

    • Starting release 6.0: Once the LRC receives the DevStatusAns MAC command from the device, it immediately reports the device battery and downlink margin to AS in a separate “DevEUI_notification” frame. This immediate reporting allows the AS to get real-time notification of the device’s battery status to be able to react accordingly (for instance, reconfiguring the device to reduce its uplink transmission frequency).

  • [RDTP-2210]: Report Device’s current LoRaWAN class + pingslot periodicity to AS/TWA

    • Prior to release 6.0: The LRC does not report to AS the pingslot periodicity of a class B device. The AS is also not notified when a device switches from class A to class B or vice versa.

    • Starting release 6.0: In order to allow AS to adapt the scheduling of its downlink transmissions to the device, the LRC reports to AS the dynamic class of the device in the Uplink Frame Report (DevEUI_uplink). If the dynamic class is Class B, the LRC also reports the current class B periodicity used by the device.


Reminder: the device informs the LRC Network Server of its class B periodicity via the MAC command PingSlotInfoReq.
Both the current LoRaWAN class of the device and the class B periodicity (if the device is currently using Class B mode) are displayed in the Device Manager and Wireless Logger applications.
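An AS consuming these reports might adapt its downlink scheduling as sketched below. The 2^periodicity-second pingslot interval follows from the LoRaWAN class B beacon window (128 s split into 2^(7-periodicity) pingslots); treat that mapping as an assumption to be checked against the LoRaWAN specification, and the function names as illustrative.

```python
# Illustrative AS-side scheduling decision based on the dynamic class and
# class B periodicity reported in DevEUI_uplink (RDTP-2210). The pingslot
# interval formula is an assumption derived from the 128 s beacon period.

def pingslot_interval_s(periodicity: int) -> int:
    """Seconds between class B pingslots for PingSlotInfoReq periodicity 0..7."""
    if not 0 <= periodicity <= 7:
        raise ValueError("periodicity must be in 0..7")
    return 2 ** periodicity          # 128 s / 2**(7 - periodicity)

def downlink_strategy(device_class: str, periodicity=None) -> str:
    """Pick a downlink scheduling strategy from the reported class."""
    if device_class == "B" and periodicity is not None:
        return f"schedule on pingslots every {pingslot_interval_s(periodicity)} s"
    if device_class == "C":
        return "send any time (RX2 continuously open)"
    return "wait for next uplink (class A RX1/RX2 windows)"

assert pingslot_interval_s(0) == 1     # 128 pingslots per 128 s beacon period
assert pingslot_interval_s(7) == 128   # one pingslot per beacon period
assert downlink_strategy("A") == "wait for next uplink (class A RX1/RX2 windows)"
```

This is exactly the fallback described in the benefits section: when a provisioned class B device reverts to class A, the strategy degrades gracefully to waiting for the RX1/RX2 windows.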


Key Customer Benefits

Thanks to the features presented in the previous section, the LRC-AS interface is enriched to:

  • Report more metadata for the uplink/downlink frames, by adding the RF frequency information besides the Logical Channel (LC) ID.

  • Allow easy correlation between the downlink frame transmission requests by AS and the corresponding transmission status (Downlink Sent Indication reports), with simplified processing on AS side.

  • Notify AS when the device performs a reset/join (RDTP-6024): this allows AS to react to this reset (if needed) by taking the appropriate decisions, for instance:

    • The AS may need to purge DL queue at LRC when the Class A device rejoins, if the queued frames are not relevant for the new session.

    • AS may need to send applicative configuration messages when the device reboots.

  • [HSM mode]: Allow AS to immediately schedule downlink frames for class B/C devices after a successful join procedure without waiting for an applicative uplink frame (RDTP-3966).

  • Real-time notification of the device’s battery status to AS without waiting to piggyback it with an uplink applicative frame (RDTP-8435).


  • Allow the AS to adapt the periodicity of its downlink transmissions according to the current LoRaWAN class used by the device and its listening periodicity. Indeed, a device provisioned in Class B mode may revert to class A for any reason (for instance by applicative configuration or due to lack of a valid beacon signal); in such conditions the AS needs to adjust its downlink transmissions to match the class A listening periods over RX1/RX2 instead of class B.


Feature Activation

  • The reporting of uplink/downlink RF frequency (RDTP-7009/RDTP-8672) to AS is activated by default and cannot be deactivated.

  • The activation of RDTP-6859 depends on the presence of the CorrelationID in the Downlink Frame request sent by AS. If this query parameter is not used by AS, RDTP-6859 becomes inactive (backward compatibility).

  • The DevEUI_notification frame sent by LRC to AS – used to notify a device reset (RDTP-6024), report the AppSKey (RDTP-3966) or the battery status (RDTP-8435) – is deactivated by default. It is activated by an LRC-wide parameter in lrc.ini:

[features].DeviceNotification:
  • 0: device notification is fully disabled (default value)
  • 1: device notification is enabled for TWA only
  • 2: device notification is enabled for all devices/AS and for TWA
  • 3: device notification is enabled for all devices/AS where DevEuiNotification>=1 and for TWA (reserved for future use)
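For example, to enable device notifications for all devices/AS and for TWA, the corresponding lrc.ini fragment would look like the following (the [features] section name is inferred from the parameter path above; verify it against your platform's lrc.ini):

```ini
[features]
    DeviceNotification=2 ; reset/join, AppSKey and battery reports sent to all AS and to TWA
```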

  • The reporting of the current LoRaWAN class of the device and the class B periodicity (RDTP-2210) is activated by default and cannot be deactivated.

Feature Limitations

  • RDTP-6859: The CorrelationID unicity is not enforced by the LRC; it is the AS's responsibility to choose unambiguous CorrelationIDs for the different downlink frames of a given device.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









RDTP-7280: LRC log enhancement

Feature Description

This feature brings several enhancements to the LRC logging functionality:

  • Trace levels: a new classification is introduced in release 6.0:
    • ERROR: trace level 0
    • WARNING: trace level 1
    • INFO: trace level 5
    • DETAIL: trace level 9
    • DEBUG: trace level 10 (reserved for R&D)

NOTE: Trace levels 2, 3, 4, 6, 7, 8 are reserved for future use.

To configure LRC trace level in lrc.ini:

a. A global trace level can be configured by default for all LRC threads.


[trace]
     level=0 ; global level, used by default if level is not set for a specific thread.
    debug=1
    file=TRACE.log
    fshigh=90.0 ; max % of file system use to stop traces
    fslow=85.0 ; min % of file system use to restart traces


b. A different level can be set for each thread, if not set then the thread inherits from the global trace level.


NOTE: This feature also allows setting the trace level (globally or for a given thread) via command line interface (CLI).
For example, to investigate a specific beacon issue, the LRC administrator can increase the trace level of the lrc-lrr-beacon thread using the command:
       thtrace lrc-lrr-beacon 5.
To apply the same level for all threads, use thtrace * 0.


  • Prefixing of trace lines: A prefix is added between parentheses in each line to indicate the corresponding LRC thread type (instead of the thread ID used in previous releases), so that it becomes easier to track the logs related to a given feature/behavior.

Example:

16:05:49.400 (lrc-lora) [ADRv3.c:784] Distinct_Pkts_CNT=2 PER_LT=-1.000000  PER_ST=-1.000000
16:05:49.400 (lrc-lora) [ADRv3.c:827] ComputeOverlap_v12
16:05:49.400 (lrc-lora) [ADRv3.c:1830] instantPERoneLRR lrrid=290000fc  expected=2 cnt=1 instantPER=0.50

Lines are also labeled with specific feature names such as ADRv3, NFR924…etc.
Additionally, the identity of the object concerned by the trace is added whenever relevant: deveui=xxxxxx or devaddr=yyyyyy for devices, lrrid=zzzzzzz for LRRs…etc.
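The trace-line format shown above ("HH:MM:SS.mmm (thread) [file:line] message") lends itself to simple tooling; the following parser is an illustrative sketch, not an official log-analysis utility.

```python
# Illustrative parser for the release 6.0 LRC trace-line format:
#   HH:MM:SS.mmm (thread-type) [source-file:line] message
import re

TRACE_RE = re.compile(
    r"^(?P<time>\d{2}:\d{2}:\d{2}\.\d{3}) "   # timestamp with milliseconds
    r"\((?P<thread>[^)]+)\) "                  # thread type, e.g. lrc-lora
    r"\[(?P<src>[^:\]]+):(?P<line>\d+)\] "     # source file and line
    r"(?P<msg>.*)$"                            # free-form message
)

def parse_trace(line: str) -> dict:
    m = TRACE_RE.match(line)
    if not m:
        raise ValueError(f"not a trace line: {line!r}")
    return m.groupdict()

rec = parse_trace(
    "16:05:49.400 (lrc-lora) [ADRv3.c:1830] "
    "instantPERoneLRR lrrid=290000fc  expected=2 cnt=1 instantPER=0.50"
)
assert rec["thread"] == "lrc-lora"
assert rec["src"] == "ADRv3.c" and rec["line"] == "1830"
assert "lrrid=290000fc" in rec["msg"]
```

Filtering by the `thread` field gives the same per-feature view as raising the per-thread trace level, but after the fact, on already-collected logs.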

Key Customer Benefits

This feature offers the following benefits to Operators having direct access to the LRC:

  • More comprehensive classification of the different trace levels, as well as high flexibility to configure trace levels differently for each LRC thread (e.g. to analyze specific problems).

  • Better readability of the trace lines to simplify analysis/debug.

  • Overall reduction of the troubleshooting effort/time.

Feature Activation

Not applicable.

Feature Limitations

The following points are out-of-scope of this feature:

  • Transfer of LRC logs to external servers.

  • Support of Syslog client on the LRC to enhance the management of internal logging, rotation and file cleanup.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









Device alarm enhancement (RDTP-7010 and RDTP-7011)

Feature Description

The scope of this feature is to enhance the generation of some alarms related to adversary attacks:

  • [RDTP-7010]: Add AppEUI/JoinEUI to “Wrong MIC detected in Join request” - Device alarm # 008 and Base Station alarm # 114.

    • Prior to release 6.0: The alarm “Wrong MIC detected in Join request” generated at device level (alarm # 008) and Base Station level (alarm #114) only indicates the DevEUI but does not indicate the AppEUI/JoinEUI.

    • Starting release 6.0: When this alarm is raised, the AppEUI/JoinEUI included in the fake Join Request message is reported to the device and Base Station administrators.

Reminder: The alarm “Wrong MIC detected in Join request” is raised when ThingPark receives a Join Request from a device but cannot authenticate this request due to wrong Message Integrity Code (MIC). This could be either due to an adversary attack or a wrongly provisioned AppKey.

  • [RDTP-7011]: Generate “Wrong MIC detected in Uplink frame” alarm at device level.

    • Prior to release 6.0: The alarm “Wrong MIC detected in Uplink frame” is only generated at Base Station level (alarm # 117) because of potential ambiguity related to the mapping between a DevAddr and the corresponding DevEUI.

    • Starting release 6.0: The alarm “Wrong MIC detected in Uplink frame” is generated at device level (new alarm # 016) besides the BS alarm # 117. To mitigate the DevAddr ambiguity issue, ThingPark implements the following logic:

      • If the DevAddr matches only one DevEUI provisioned under the Operator scope, the device alarm # 016 shall be raised on this DevEUI with default severity = major.

      • If the DevAddr matches more than one DevEUI provisioned under the Operator scope, the device alarm # 016 shall be raised on all those DevEUI with default severity = indeterminate (due to DevAddr collision/ambiguity).

Reminder: DevAddr collision is authorized in LoRaWAN: several devices can be associated with the same DevAddr, but the DevAddr/NwkSKey pair must be unique within the network. When the LRC cannot authenticate the uplink frame due to a wrong MIC, it becomes impossible for the LRC to precisely identify the device in case of DevAddr collision.
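The severity rule for the new device alarm # 016 can be summarized in a few lines. The function and field names below are illustrative, not the TWA implementation; only the one-match/many-match severity logic comes from this section.

```python
# Sketch of the alarm #016 severity rule described above: resolve a DevAddr
# to the candidate DevEUIs within the Operator scope, then pick a severity.
# Names are illustrative, not the TWA implementation.

def raise_wrong_mic_alarms(devaddr: str, addr_table: dict) -> list:
    """addr_table maps DevAddr -> list of provisioned DevEUIs.
    One match: severity 'major'; several matches (DevAddr collision):
    severity 'indeterminate' on every candidate DevEUI."""
    deveuis = addr_table.get(devaddr, [])
    severity = "major" if len(deveuis) == 1 else "indeterminate"
    return [
        {"alarm": 16, "deveui": eui, "severity": severity, "devaddr": devaddr}
        for eui in deveuis
    ]

table = {
    "04a1b2c3": ["70b3d500aa000001"],                      # unique mapping
    "04ffffff": ["70b3d500aa000002", "70b3d500aa000003"],  # DevAddr collision
}
assert raise_wrong_mic_alarms("04a1b2c3", table)[0]["severity"] == "major"
assert {a["severity"] for a in raise_wrong_mic_alarms("04ffffff", table)} == {"indeterminate"}
assert raise_wrong_mic_alarms("deadbeef", table) == []     # unknown DevAddr: no device alarm
```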

Key Customer Benefits

This enhancement allows subscribers to better monitor the devices under their ownership; it also helps them troubleshoot adversary attacks targeting their devices.

Feature Activation

The above alarm enhancements are activated by default and cannot be deactivated.

Feature Limitations

Not applicable.

Feature Interaction Matrix


LoRaWAN backend interfaces



Radio Interface



TWA User Interface



OSS API



NS-AS Interface



Billing / UDR



Alarms





RDTP-2217: Subscriber KPI Dashboard

Feature Description

This feature provides ThingPark Wireless subscribers with a new application “Dashboard” showing them Key Performance Indicators (KPI) related to their devices.

Wireless Subscriber Dashboard application is integrated as a SMP application. It must be packaged in an offer to be sold to subscribers. A subscriber will have access to the Dashboard through the ThingPark User Portal, as per the following view:



An administrator (for instance, the Customer Support team with Operator access) can access the Dashboard application through SMP impersonation, on a subscriber having an active subscription to the Subscriber Dashboard.

The Dashboard consists of eight widgets; the default widget configuration is shown in the following figure. Note that each end-user of the Subscriber role accesses a dedicated Dashboard, so they can customize the widget organization/preferences on their own:



The following KPIs are supported by the Subscriber Dashboard:

  • Uplink/Downlink Traffic Volume: Cumulated number of uplink & downlink frames over the selected aggregation period. Display: Timeline, Top-10 devices.

  • Downlink MAC Commands / Uplink Traffic Volume Ratio: Ratio of downlink MAC commands with respect to the total number of uplink packets over the selected aggregation period. Useful to assess the efficiency of the MAC-layer reconfiguration algorithms and avoid excessive downlink overhead. Display: Timeline, Top-10 devices.

  • Uplink Spreading Factor Distribution: Distribution of uplink Spreading Factors over the selected aggregation period. Display: Timeline, Pie chart.

  • Uplink Airtime Distribution: Time-on-air distribution of uplink packets (in milliseconds) over the selected aggregation period. Display: Timeline, Pie chart.

  • Uplink Macro Diversity Distribution: Distribution of uplink macro diversity (the number of base stations that received each uplink packet) over the selected aggregation period. Display: Pie chart.

  • Uplink Average Macro Diversity: Average uplink macro diversity (the number of base stations that received each uplink packet) over the selected aggregation period. Display: Timeline, Top-10 devices.

  • Uplink On-Time / Late Distribution: Split of uplink packets between packets reported on time and late packets (buffered by the LRR), over the selected aggregation period. Display: Timeline, Pie chart.

  • Downlink Success / Failed Distribution: Split of downlink packets between successfully sent transmissions and failed transmissions, over the selected aggregation period. Display: Timeline, Pie chart.

  • Battery Level: Top (or worst, depending on user configuration) 10 devices in terms of battery level, over the aggregation period. Display: Top-10 devices.

  • Device Split per Device Profile: Statistics on the total number of devices associated with each device profile. Display: Pie chart.

  • Device Split per Device Manufacturer: Statistics on the total number of devices associated with each device manufacturer. Display: Pie chart.

  • Device Split per Health State: Statistics on the total number of devices per health state: Initialization, Active, Connection Error. Display: Timeline, Pie chart.

  • Device Packet Error Rate (PER): Weighted-average packet error rate of uplink packets, aggregated over all devices over the selected aggregation period. Display: Timeline, Top-10 devices.

  • Average Airtime per UL Packet: Top (or worst, depending on user configuration) 10 devices in terms of their average airtime per uplink frame. Note: an uplink frame sent twice (NbTrans=2) has twice the effective airtime of the same frame sent once (NbTrans=1). Display: Top-10 devices.

  • TxPower Distribution: TxPower distribution (in dBm) of uplink packets over the selected aggregation period. Display: Pie chart.

  • NbTrans Distribution: Distribution of the number of uplink transmissions of each uplink MAC packet, over the selected aggregation period. Display: Pie chart.

  • RX1 / PingSlot vs. RX2: Split of downlink packets between RX1 and RX2. NOTE: Pingslot transmissions are counted with RX1 for this indicator. Display: Pie chart.

  • Uplink Confirmed vs. Unconfirmed Packet Distribution: Split of uplink packets between confirmed and unconfirmed modes, over the selected aggregation period. Display: Pie chart.


Key Customer Benefits

Thanks to this feature, ThingPark Wireless Subscribers can easily:

  • Monitor the health of their devices.

  • Follow the evolution of their main KPIs over time.

  • Identify devices having unacceptable performance (via the Top/Worst 10 statistics).

  • Have a comprehensive view of the distribution of their devices according to many criteria (split by manufacturer, by device model…etc.)

This Dashboard also provides TPW Operators (e.g. technical support teams) with the necessary tools to proactively diagnose performance issues for their key Subscribers so they can take the appropriate corrective actions.

Feature Activation

This feature is deactivated by default. To activate it, the Operator/Vendor must add the “Subscriber Dashboard” application to vendor offers to be available for TPW Subscribers.

This feature has the following pre-requisites:

  • TWA-Spark (based on Apache Spark) should be installed on the platform; this framework is required to perform KPI aggregation.

  • DC/OS should be set up on the platform; please see Section 4.4.9 for more details.

NOTE: Actility does not recommend activating this feature for all Subscribers; in the current release, it is recommended to limit its usage to 100 subscribers (maximum 500K devices) to minimize the risk of platform scalability issues.

Feature Limitations

This feature has the following limitations:

  • Metrics are available in the Dashboard with a delay of:

    • 10 min for hourly device metrics.

    • 1h10min for device metrics aggregated on daily/weekly basis.

    • 1h10 min for hourly subscriber metrics.

    • 2h10min for subscriber metrics aggregated on daily/weekly basis.

  • KPI labels are provided only in the en-US language; there is no maintenance script or webservice to add translations in other languages. New translations can only be added by direct database insertion.

  • KPI metadata are system-wide, so they cannot be customized per operator (for SaaS operators).

Feature Interaction Matrix


LoRaWAN backend interfaces



Radio Interface



TWA User Interface



OSS API



NS-AS Interface



Billing / UDR



Alarms




RDTP-3247-2201: High Availability and scalability for Spark KPI applications

Feature Description

This feature introduces the use of Data Center Operational System (DC/OS) to host TWA Spark KPI applications, adding high availability and horizontal scaling to the KPI framework.

The Spark framework is required for KPI dashboards; it has been supported in ThingPark Wireless since release 4.3, where it introduced the Operator KPI Dashboard. In release 6.0, this framework is also used to offer a KPI dashboard for the Subscriber role.

Prior to release 6.0, the Spark server was installed on a single server: in case of failure, a system administrator had to intervene manually to restore the service. Nevertheless, no data was lost in the process.

Release 6.0 supports fault tolerance for KPI-side failures with the help of DC/OS, which keeps the system (and thus the KPI display) operational even in case of a dysfunction.


DC/OS is a distributed system built upon Apache Mesos. It provides a flexible infrastructure for microservices, containers and scalable big-data components for modern applications. DC/OS assumes that services may fail: it kills failed instances and replaces them with new ones, providing powerful tools to build a self-healing distributed system. High-level services and applications, as well as complex framework deployments such as Apache Spark or Apache Kafka, run on top of DC/OS.


The layer below DC/OS, Apache Mesos, is the heart of the distributed system: the foundation on which the other stack components run. It abstracts CPU, memory and storage in the cloud, taking charge of resource management and scheduling across entire cloud environments. It is widely used in industry to build real-time systems.

DC/OS can be accessed either via the GUI (available from a master node and providing a graphical view of the cluster) or via the CLI (a command-line utility to manage cluster nodes, install and manage packages, inspect the cluster state, and manage services and tasks).

Two types of nodes form a DC/OS cluster:

  • Master nodes: manage the rest of the cluster by collaborating with each other

  • Agent nodes: nodes on which user tasks run





As shown above, the Master node manages the Agent nodes as well as the Executors and Schedulers that manage DC/OS tasks.

When running the Apache Spark applications in charge of KPI generation, the following architecture applies:



The Cluster Manager acts as the Mesos Master node; the Driver program is a Mesos scheduler that dispatches Executor workloads onto the Agent nodes (Mesos agents) according to the resources offered by the Mesos Master node.

The Spark Driver itself runs as a Mesos task, managed by the Marathon component for streaming applications or by the Metronome component for batch applications.

Key customer benefits

Thanks to DC/OS's built-in high availability and fault tolerance, hosted KPI applications gain HA and horizontal-scaling capabilities, drastically diminishing the probability of a system outage. The visible benefit is the absence of gaps in the KPI display in the customer-facing GUI.

Feature Activation

The feature is activated by default.

Feature Limitations

Not applicable.

Feature Interaction Matrix


LoRaWAN backend interfaces



Radio Interface



TWA User Interface



OSS API



NS-AS Interface



Billing / UDR



Alarms




RDTP-4181: Add the monitoring of the AS server response (LRC part)

Feature Description

This feature introduces statistics on the health of the LRC-AS tunnel interface for HTTP Routing Profiles.

The following statistics are aggregated by the LRC every 5 minutes (default periodicity, configurable in lrc.ini) and reported for each Routing Profile of HTTP type:

Attribute
Description
ID
RoutingProfile-ID
LrcID
LRC ID
OpeID
Operator-ID, provided in routing profile provisioning.
OwnerID
Owner-Id of the routing profile. NOTE: this is not the subscriber Id of the device.
Time
Timestamp (ISO date/time) associated to the generation of this report.
Period
Period (in seconds) of aggregation for this report.
PeriodCount
Period counter since routing profile was loaded or modified.
Request
Number of requests (UL_Frame report, DL_Frame_Sent report, Multicast_Summary report, Location report…) towards this Routing Profile inside this reporting window.
OverloadDropped
Number of outgoing HTTP requests that were dropped (not sent to the AS) inside this reporting window because no handler was available (pending requests reached the high-water mark).
BlackListDropped
Number of requests that were dropped because the routing profile was blacklisted inside this reporting window.
RequestError
For Routing Profiles configured in “Sequential” mode: Number of requests that could not be delivered because all destinations failed with a server error (Timeout or HTTP error).
For Routing Profiles configured in “Blast” mode (see RDTP-5790): Number of requests that could not be delivered because at least one destination failed with a server error (Timeout or HTTP error).
RequestOK
Number of requests that were delivered successfully to at least one destination (in sequential mode) or to all destinations (in Blast mode, see RDTP-5790). NOTE: if a Routing Profile is configured in Blast mode without activating RDTP-5790, the request is counted in RequestOK regardless of delivery status.
BlackListedDuration
Total number of seconds the routing profile was blacklisted during this reporting window.
OverloadDuration
Total number of seconds the routing profile was overloaded during this reporting window.
Descs/Idx
Index of the destination URL (provided for each destination included in the Routing Profile).
Descs/Desc
Destination URL for correlation purpose.
Descs/Request
Number of requests processed for this destination within this reporting window. It counts all the requests regardless of their status (OK, Error, Timeout…).
Descs/AvgRT
Average roundtrip in milliseconds (between the HTTP request and the HTTP response) for this destination, aggregated over this reporting window. Requests that fail due to timeout are excluded.
Descs/DevRT
Standard deviation of the average roundtrip in milliseconds (between the HTTP request and the HTTP response) for this destination in the associated reporting window. Requests that fail due to timeout are excluded.
Descs/MaxRT
Maximum roundtrip in milliseconds (between the HTTP request and the HTTP response) for this destination in the associated reporting window. Requests that fail due to timeout are excluded.
Descs/MaxRTTime
Timestamp (ISO date/time) of maximum roundtrip (between the HTTP request and the HTTP response) for this destination in the associated reporting window. Requests that fail due to timeout are excluded.
Descs/AvgTT
Average roundtrip in milliseconds (between the HTTP request and the end of processing) for this destination in the associated reporting window. All requests are considered in this aggregation, including the ones that fail due to timeout.
Descs/DevTT
Standard deviation of the average roundtrip in milliseconds (between the HTTP request and the end of processing) for this destination in the associated reporting window. All requests are considered in this aggregation, including the ones that fail due to timeout.
Descs/MaxTT
Maximum roundtrip in milliseconds (between the HTTP request and the end of processing) for this destination in the associated reporting window. All requests are considered in this aggregation, including the ones that fail due to timeout.
Descs/MaxTTTime
Timestamp (ISO date/time) of maximum roundtrip (between the HTTP request and the end of processing) for this destination in the associated reporting window. All requests are considered in this aggregation, including the ones that fail due to timeout.
Descs/TlsEstablish
Number of TLS establishments for this destination in the associated reporting window. Default: 0.
Descs/OK
Number of HTTP OK (2xx) for this destination in the associated reporting window. Default: 0.
Descs/Timeout
Number of HTTP Timeout for this destination in the associated reporting window. Default: 0.
Descs/Error
Number of HTTP ERROR (!= 2xx) for this destination in the associated reporting window. Default: 0.
Descs/Blast
Number of requests processed in BLAST mode for this destination in the associated reporting window (DELTA). Response status of requests is unknown in BLAST mode unless the behavior described by RDTP-5790 (see Section 4.2.2 ) is activated.
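As an illustration of how the AvgRT/DevRT/MaxRT aggregates can be maintained over a reporting window, the sketch below (not Actility code; the class and method names are assumptions) updates them incrementally using Welford's online algorithm:

```python
import math

class RoundtripWindow:
    """Incrementally aggregates roundtrip samples (in ms) over one
    reporting window, producing AvgRT / DevRT / MaxRT-style values."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0          # running sum of squared deviations (Welford)
        self.max_rt = 0
        self.max_rt_time = None

    def add(self, rt_ms, timestamp):
        self.count += 1
        delta = rt_ms - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (rt_ms - self.mean)
        if rt_ms > self.max_rt:
            self.max_rt = rt_ms
            self.max_rt_time = timestamp

    def report(self):
        dev = math.sqrt(self.m2 / self.count) if self.count else 0.0
        return {"AvgRT": round(self.mean), "DevRT": round(dev),
                "MaxRT": self.max_rt, "MaxRTTime": self.max_rt_time}

w = RoundtripWindow()
w.add(3, "2018-09-25T19:28:48.000+02:00")
w.add(9, "2018-09-25T19:28:47.000+02:00")
print(w.report())  # AvgRT 6, DevRT 3, MaxRT 9
```

The same structure would apply to the TT aggregates, except that timed-out requests are included rather than excluded.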

The stats described above are produced by the LRC as a JSON document "AS_statistic" on a dedicated Kafka topic "OSS.AS.V1"; this topic has 50 partitions (partition key = routing-profile-ID).


{
        "AS_statistic": {
                "ID": "UPHTTP_AS_WAIT_SEQ",
                "Time": "2018-09-25T19:28:45.000+02:00",
                "Period": 15,
                "PeriodCount": 4,
                "LrcID": "00000000",
                "OpeID": "ope_id",
                "OwnerID": "owner_id",
                "Request": 2,
                "RequestOk": 0,
                "RequestError": 2,
                "OverloadDropped": 0,
                "BlackListDropped": 0,
                "OverloadDuration": 0.000000,
                "BlackListDuration": 0.000000,
                "Descs": [ // This section is provided for each destination of the routing profile
                        {
                                "Idx": 0,
                                "Desc": "http://trash:5555/lora?p=X",
                                "Request": 2,
                                "Blast": 2
                        },
                        {
                                "Idx": 2,
                                "Desc": "http://localhost:6666/lora?p=1",
                                "Request": 1,
                                "Ok": 1,
                                "Timeout": 0,
                                "Error": 0,
                                "AvgRT": 3,
                                "DevRT": 0,
                                "MaxRT": 3,
                                "MaxRTTime": "2018-09-25T19:28:48.000+02:00",
                                "AvgTT": 3,
                                "DevTT": 0,
                                "MaxTT": 3,
                                "MaxTTTime": "2018-09-25T19:28:48.000+02:00"
                        },
                        {
                                "Idx": 4,
                                "Desc": "http://localhost:6666/lora?p=2",
                                "Request": 1,
                                "Ok": 1,
                                "Timeout": 0,
                                "Error": 0,
                                "AvgRT": 9,
                                "DevRT": 0,
                                "MaxRT": 9,
                                "MaxRTTime": "2018-09-25T19:28:47.000+02:00",
                                "AvgTT": 9,
                                "DevTT": 0,
                                "MaxTT": 9,
                                "MaxTTTime": "2018-09-25T19:28:47.000+02:00"
                        },
                        {
                                "Idx": 6,
                                "Desc": "http://trash:5555/lora",
                                "Request": 2,
                                "Ok": 0,
                                "Timeout": 2,
                                "Error": 0,
                                "AvgTT": 49,
                                "DevTT": 12,
                                "MaxTT": 62,
                                "MaxTTTime": "2018-09-25T19:28:47.000+02:00"
                        },
                        {
                                "Idx": 7,
                                "Desc": "http://localhost:5555/lora",
                                "Request": 2,
                                "Ok": 0,
                                "Timeout": 2,
                                "Error": 0,
                                "AvgTT": 0,
                                "DevTT": 0,
                                "MaxTT": 0,
                                "MaxTTTime": "2018-09-25T19:28:48.000+02:00"
                        }
                ]
        }
}
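An operator-side consumer can post-process such reports to spot failing destinations. The sketch below is an illustration only (field names come from the attribute table above; the per-destination "health" metric itself is an assumption, not a product feature):

```python
import json

def destination_health(as_stat):
    """Compute a per-destination success ratio from an AS_statistic report.
    Destinations reporting only 'Blast' counters have unknown delivery
    status (RDTP-5790 not active), so they map to None."""
    health = {}
    for desc in as_stat.get("Descs", []):
        requests = desc.get("Request", 0)
        if "Ok" not in desc:            # blast-only destination: status unknown
            health[desc["Desc"]] = None
            continue
        health[desc["Desc"]] = desc["Ok"] / requests if requests else None
    return health

# Trimmed-down version of the sample report above.
sample = json.loads("""
{"ID": "UPHTTP_AS_WAIT_SEQ", "Request": 2,
 "Descs": [
   {"Idx": 0, "Desc": "http://trash:5555/lora?p=X", "Request": 2, "Blast": 2},
   {"Idx": 2, "Desc": "http://localhost:6666/lora?p=1", "Request": 1, "Ok": 1,
    "Timeout": 0, "Error": 0},
   {"Idx": 6, "Desc": "http://trash:5555/lora", "Request": 2, "Ok": 0,
    "Timeout": 2, "Error": 0}
 ]}
""")
print(destination_health(sample))
```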


Additionally, an AS_event JSON document is created by the LRC every time the AS status changes, i.e. the Routing Profile switches from one of the following states to another:

  • Nominal state (NominalReached): the Routing Profile is working properly, the AS is responding in a timely manner with no overload/blacklist situation.

  • Overloaded state (OverloadReached): the Routing Profile is overloaded, i.e. all the handlers are currently occupied and the LRC starts dropping new outgoing HTTP POSTs until the Routing Profile goes back to the nominal state.

  • Blacklisted state (BlacklistReached): when the overload state persists over time, the Routing Profile is eventually blacklisted: all outgoing HTTP POSTs are blocked during the blacklisting period.

Example of the AS_event document:


{
        "AS_event": {
                "ID": "ROUTING1",
                "Time": "2014-04-01T14:19:56.38+02:00",
                "LrcID": "00000065",
                "OpeID": "actility-ope",
                "OwnerID": "199906950",
                "Event": "BlacklistReached"
        }
}


Key Customer Benefits

This feature provides ThingPark Wireless Operators with a comprehensive set of statistics to improve the debugging/troubleshooting of the LRC-AS interface when some Application Servers respond slowly to HTTP POSTs from the LRC, causing temporary packet drops due to the blacklisting mechanism on the LRC side.

Feature Activation

This feature is activated by default: after the upgrade to release 6.0, the LRC starts computing periodic statistics on each HTTP Routing Profile.

To deactivate this feature, the LRC administrator can set asstatperiod to 0 in the custom lrc.ini.

The following feature-related parameters are configurable in lrc.ini under [lrnoutput]:

  • asstatperiod : Aggregation duration of each report in seconds, default 300 (0 to deactivate the feature).


  • aseventmask : Controls the reporting of state changes (Nominal, Overloaded, Blacklisted).
    Concerns only AS_event reports. 0: report only the Blacklisted event (default); 1: report all state-change events.


  • askafkareport : Report per-AS stats to Kafka; 0 to deactivate, 1 to activate (default).
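Putting the three parameters together, a customized [lrnoutput] section in the custom lrc.ini could look as follows (the values are illustrative, not recommendations):

```ini
[lrnoutput]
; Aggregate AS statistics over 10 minutes instead of the default 300 s
asstatperiod=600
; Report all state-change events (Nominal, Overloaded, Blacklisted)
aseventmask=1
; Publish per-AS statistics to the OSS.AS.V1 Kafka topic (default)
askafkareport=1
```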

To retrieve these stats from the LRC, the following command line interface (CLI) shall be used:  xor <routing-profile-id>

Another way to retrieve these stats is to consume the Kafka topic OSS.AS.V1 on LRC-OSS interface.
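A minimal consumer sketch for that topic, assuming the kafka-python client; the broker address and the handling logic are illustrative, not part of the product:

```python
import json

def classify(message):
    """Route a document consumed from the OSS.AS.V1 topic by its type."""
    if "AS_statistic" in message:
        return "statistic"
    if "AS_event" in message:
        return "event"
    return "unknown"

if __name__ == "__main__":
    # Requires the kafka-python package and a reachable broker;
    # the bootstrap address below is an assumption.
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        "OSS.AS.V1",
        bootstrap_servers="lrc-oss:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for record in consumer:
        doc = record.value
        if classify(doc) == "event":
            evt = doc["AS_event"]
            print(f"{evt['Time']} routing profile {evt['ID']}: {evt['Event']}")
```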

Feature Limitations

  • In the current release, the statistics introduced by this feature are produced by the LRC but not processed by TWA Device Manager / Subscriber KPI Dashboard applications.

  • The statistics available at HTTP-destination level are aggregated separately by the LRC for each routing profile; hence, if the same HTTP destination is configured across different Routing Profiles, the related stats are not aggregated across all the corresponding Routing Profiles.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms








RDTP-2488: Read-only/viewer mode for all role types

Feature Description

This feature provides ThingPark Wireless administrators and end-users with enhanced access permission options.

Prior to release 6.0:

  • Use-case #1: All end-users created under the Subscriber role have full read/write access to all the subscribed applications (Device Manager, Network Manager, Key Manager for JS).

  • Use-case #2: ThingPark Wireless administrators (i.e. Operator administrators, Network Provider administrators, Connectivity Supplier administrators, Vendor administrators, Supplier administrators) either have full-administrator access or belong to specific security groups. However, there is no read-only mode allowing an administrator to access all the applications/tabs without write access.

  • Use-case #3: A vendor administrator that is a member of the “Subscriber Managers” group cannot create subscribers; only full administrators can do so.

Starting release 6.0:

  • Use-case #1: End-users can have one of the following permission profiles:

    • Full administrator with read/write access to all applications. This is the pre-release 6.0 mode; all the existing end-users are automatically migrated to this profile after TPW upgrade to release 6.0.

    • Read-only mode for all applications: in this mode, the end-user has access to all applications but cannot perform any administrative action such as create/edit/delete/move/tag, etc. The related buttons are either hidden or greyed out in the User Interface.

    • Hybrid profile: an intermediate mode between the two profiles above, where the end-user can be granted write access to one of the available applications (e.g. Device Manager) but remains in read-only mode for the other applications (e.g. Network Manager).

The following figure illustrates the different security groups available for a typical end-user subscribed to Device Manager and Network Manager applications:



NOTE: During the creation of a new application, a new field is supported to indicate whether it requires write access or not; as illustrated by the following figure:



  • Use-case #2: Besides the existing security groups, a new group called “Viewers” is supported by this feature. Administrators assigned to this group have read-only access to all the applications/tabs available to the corresponding role. They can also impersonate underlying roles if applicable, but still in read-only mode.

To make permission management more comprehensible, the organization of the security groups has evolved in release 6.0 to support two modes:

  • Basic mode: offering only two groups: Administrators (i.e. full-admin rights with read/write access to all applications/tabs) and Viewers (read-only profile, newly introduced by this feature).

  • Advanced mode: all the security groups are proposed, not only Administrators and Viewers.


NOTE: Custom profiles are also possible here; for instance, an Operator administrator may be a member of both the “Viewers” and “Subscriber Managers” groups, in which case they have read-only access to the Operator Manager application but full read/write access to manage Subscribers.

  • Use-case #3: Vendor administrators that are members of the “Subscriber Managers” group can create subscribers even if they don’t have full-admin rights.

Key Customer Benefits

This feature enhances the permission management rules for all ThingPark Wireless roles. The read-only profile allows safer distribution of TPW accounts to end-users/administrators, ensures better control of administrative actions taken on the platform, and allows better segregation of the different profiles accessing its applications.

The detailed benefits are explained above for each use-case described.

Feature Activation

This feature is available for all ThingPark wireless administrators/end-users once the platform is upgraded to release 6.0.

Permission profiles for existing end-users/administrators are not impacted by the upgrade to the new release. It is up to the Operator/Subscriber to switch relevant end-users to read-only mode (or viewers for administrators) at their own convenience.

Feature Limitations

Activation of the “write access supported” option for existing applications is not supported.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









RDTP-5787: Improve WLogger display when DL frames are sent for repeated UL frames

Feature Description

When the same uplink frame is transmitted several times by the device (i.e. frame repetition), only one copy of this uplink frame is visible in WLogger and sent to AS thanks to the LRC deduplication mechanism. However, since each individual uplink transmission by the device allows two downlink scheduling slots (RX1 and RX2), the LRC may send downlink frames on RX1/RX2 slots of a repeated uplink frame.

Prior to release 6.0: When the situation described above occurs, the Wireless Logger user sees downlink frames without the corresponding uplink frames, which looks wrong for a Class A device.

Starting release 6.0: To avoid any ambiguity in Wireless Logger, a new display scheme is introduced in this situation to notify the user that the downlink frame in question is sent in response to an uplink frame that is masked by deduplication – see the following capture:


  • A new icon is used to display the downlink frame: 


  • In the expandable panel, the text “generated for a repeated UL” is added beside the LoRaWAN MType.

Key Customer Benefits

Thanks to this feature, the user experience is more consistent, avoiding any ambiguity in the Wireless Logger display.

Feature Activation

This feature is activated by default and cannot be deactivated.

Feature Limitations

Not applicable.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms








RDTP-2467: Spectrum Analysis parameters should be configurable in Network Manager

Feature Description

This feature improves the configuration of the RF spectrum scan utility to provide higher flexibility to Network Partners or Subscribers using Network Manager application.

Prior to release 6.0: The spectrum scan functionality supported by Network Manager does not allow configuring the start/stop frequencies or the frequency step used during the measurement. All the configuration parameters of the RF spectrum scan utility are inherited from the ISM band configured on the LRR, as per the following table:

ISM Band (Regional Profile)   Start Frequency   Stop Frequency   Step
EU868                         863.1 MHz         869.9 MHz        100 kHz
US915                         902.3 MHz         914.9 MHz        200 kHz
AU915                         915.2 MHz         927.8 MHz        200 kHz
AS923                         915.2 MHz         927.8 MHz        100 kHz
KR920                         917.1 MHz         923.5 MHz        100 kHz
IN865                         865 MHz           867 MHz          12.5 kHz
CN470                         470.3 MHz         489.3 MHz        200 kHz
CN779                         779.1 MHz         786.9 MHz        100 kHz
EU433                         433.1 MHz         434.675 MHz      25 kHz

Starting release 6.0: The Network Partner/Subscriber can customize the start/stop frequencies as well as the frequency step via Network Manager application.

From the Base Station Administration panel, when the user launches an RF spectrum scan (by clicking the “Scan the radio” button), a new window is displayed to allow the user to customize the scan parameters:






By default, if these parameters are not customized, the settings presented in the table above remain applicable.

Additionally, the user may use a different ISM Band than the one currently assigned to the Base Station (inherited from the RF Region configuration): this could be useful for Base Station hardware models that are compatible with several ISM Bands (for instance, Asian gateways can scan both the AS923 and the Australian AU915 bands).

WARNING: Choosing frequency ranges that are not compatible with the Base Station hardware capabilities may cause performance instability and result in RF scan failure.

Key Customer Benefits

This feature brings additional flexibility to Network Partners/Subscribers doing RF spectrum scan on their base stations using Network Manager application.

RF spectrum scan utility is very useful to assess the background interference/noise level perceived by the Base Stations and troubleshoot uplink radio performance issues. This utility is also crucial in the determination of the optimum RF channel plan for a given geographical region.

Feature Activation

This feature is automatically available when the platform is upgraded to release 6.0.

Feature Limitations

The maximum number of steps allowed for an RF spectrum scan is 200 (system-wide configuration).
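Given the start/stop frequencies and the step, the number of scan steps can be checked against this limit before launching a scan. A minimal sketch (the formula is inferred from the default table above, not taken from product code):

```python
def scan_steps(start_mhz, stop_mhz, step_khz):
    """Number of measurement steps for an RF spectrum scan."""
    return round((stop_mhz - start_mhz) * 1000 / step_khz)

def validate_scan(start_mhz, stop_mhz, step_khz, max_steps=200):
    """Raise if the requested scan exceeds the system-wide step limit."""
    steps = scan_steps(start_mhz, stop_mhz, step_khz)
    if steps > max_steps:
        raise ValueError(f"{steps} steps exceeds the limit of {max_steps}")
    return steps

print(validate_scan(863.1, 869.9, 100))   # EU868 defaults -> 68 steps
```

With the defaults in the table, every band stays well under the limit; the check matters mainly for custom start/stop/step combinations.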

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms








RDTP-2277: Customization of alarm parameters at operator level

Feature Description

This feature allows ThingPark Wireless SaaS Operators to personalize alarm settings differently from the default systemwide configuration.

Prior to release 6.0: Alarm settings (i.e. severity levels and trigger thresholds) can be customized system-wide (i.e. same settings apply to all customers sharing a SaaS platform).

Starting release 6.0: Customization of alarm settings becomes per-operator, allowing Operators using ThingPark Wireless in SaaS mode to personalize their alarm configuration independently. Additionally, alarm activation/deactivation can also be controlled by the Operator.

NOTE: The customization of alarm settings is not supported via APIs; it is performed by a TWA script run by ThingPark network administrators (Actility’s Network Operations team for SaaS platforms).

For more information about ThingPark Base Station and Device alarms, please refer to [8].

Key Customer Benefits

This feature is particularly useful for SaaS Operators: it provides them with full flexibility to customize the settings of Device/Base Station alarms according to their strategy.

For instance, an Operator may decide to deactivate an alarm that is not relevant for their deployment, whereas another Operator may decide to increase the trigger severity of some other alarms to improve their awareness of service-impacting problems.

Feature Activation


By default, the system-wide configuration of alarm settings is inherited by all Operators.
To personalize the alarm settings for an Operator, the network Administrator shall use the script  /home/actility/twa-admin-scripts/alarm-customization.sh  on TWA_AS:



Feature Limitations

  • The customization of alarm settings per operator is performed using a script run by the ThingPark network administrator (Actility’s Network Operations team for SaaS model). The integration of such customization in Operator Manager API and GUI is out-of-scope of this feature.

  • The periods defining the minimum sustained duration to clear an active alarm remain configurable system-wide (not operator-wide). This limitation applies to the following alarm types:

    • Device alarms: 002, 003, 007, 008, 009, 010, 011, 013, 014, 015 and 016

    • Base Station alarms: 101, 113, 114, 115, 116, 117, 118, 119 and 120.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









RDTP-2174: Global LRR configuration enhancements (NFR-920)

Feature Description

This feature brings the following enhancements to ThingPark Network Partners and Subscribers having access to Network Manager:

1/ Enhancement of the remote LRR configuration UI accessible via reverse-SSH on the Base Station (also known as SUPLOG):

  • Without this feature: SUPLOG UI supports limited functionalities:

    • View logs

    • LRR configuration: Get the current radio configuration (number of antennas, ISM band, LBT configuration, TX Power look-up tables (LUT), physical channel configuration, etc.), get the LRR-UUID, set Transmit Power and power transmission adjustment (i.e. antenna gain and cable loss).

    • Network: View ethernet or cellular networks and DNS address.

    • PKI configuration (IPSec/TLS)

    • Troubleshooting: ping, view disk usage, view real-time system metrics, view iptables, view software versions, view status of each network interface, etc.

Accordingly, it was not possible to change network/backhaul configurations via the Support account used on SUPLOG; this was only possible via root access to the base station, or by embedding all the required configuration settings in the BS image.

  • With the new feature: the following actions are added in addition to the existing ones:

    • Network configuration

      • Network interface configuration: For each interface, view/change:

        • Name (e.g. eth0) [VIEW only]

        • Type (Ethernet | Cellular | Wi-Fi) [VIEW only]

        • State (UP | DOWN) [VIEW only]

        • DHCP activation [CHANGE/VIEW]

        • DHCP lease obtained [VIEW] (only when DHCP is enabled)

        • DHCP lease expiry [VIEW] (only when DHCP is enabled)

        • VLAN [CHANGE/VIEW] (only valid when DHCP is disabled)

        • IP address [CHANGE/VIEW] (only valid when DHCP is disabled)

        • Netmask [CHANGE/VIEW] (only valid when DHCP is disabled)

        • Broadcast [CHANGE/VIEW] (only valid when DHCP is disabled)

        • Gateway [CHANGE/VIEW] (only valid when DHCP is disabled)

        • DNS servers [CHANGE/VIEW] (only valid when DHCP is disabled)

      • Cellular interface configuration:

        • APN auto-configuration [CHANGE/VIEW]: When activated, the BS is expected to retrieve the MCC (Mobile Country Code) and MNC (Mobile Network Code) from the SIM card, and map the MCC/MNC pair to the right APN configuration based on a local mapping database included in the BS image.

        • APN [CHANGE/VIEW]

        • PIN [CHANGE/VIEW]

        • USER [CHANGE/VIEW]

        • PASSWORD [CHANGE/VIEW]

      • WiFi interface configuration:

        • SSID [CHANGE/VIEW]

        • SECURITY [CHANGE/VIEW]

        • KEY [CHANGE]

      • NTP configuration: view and add NTP servers (up to 3)

      • Time zone configuration

    • IP Failover configuration (also known as interface failover, this mechanism is used to switch to secondary network interface when the primary interface is down):

      • Activation [VIEW only]

      • Local SSH port [CHANGE/VIEW]

      • Network interface configuration:

        • Name [VIEW only]

        • Status (Primary | Secondary | Not Used) [CHANGE/VIEW] (to deactivate an interface, set it to “Not Used”).

        • Bandwidth constraint (Very Limited | Limited | No) [CHANGE/VIEW]

        • ICMP addresses to use to validate the state of the interface [VIEW only]

    • PKI configuration (IPSec/TLS):

      • Activation (start/stop) [CHANGE/VIEW]

      • Key installer Platform [CHANGE/VIEW]

      • Key installer instance [CHANGE/VIEW]

      • Key installer server [CHANGE/VIEW]

      • Cleanup certificates [ACTION]

    • Backhaul configuration:

      • LRC servers: List of IPv4 addresses or FQDN (up to 2) [CHANGE/VIEW]

      • FTP download servers: List of IPv4 addresses or FQDN (up to 2) [CHANGE/VIEW]

      • FTP upload servers: List of IPv4 addresses or FQDN (up to 2) [CHANGE/VIEW]

      • Reverse SSH servers: List of IPv4 addresses or FQDN (up to 2) [CHANGE/VIEW]

      • ICMP addresses (see RDTP-4176): List of IPv4 addresses to use to validate the state of the interface [VIEW only]

    • Enhanced troubleshooting options, for both network and IPSec/TLS.
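The APN auto-configuration lookup described above can be sketched as follows; the mapping entries are hypothetical (the real database ships inside the BS image):

```python
# Illustrative MCC/MNC -> APN mapping database; entries are hypothetical.
APN_DB = {
    ("208", "01"): "orange.m2m",
    ("204", "08"): "kpn.m2m",
}

def auto_apn(mcc, mnc, fallback=None):
    """Map the SIM card's MCC/MNC pair to an APN, as the BS would on boot.
    Returns the fallback (e.g. a manually configured APN) when the pair
    is not in the local mapping database."""
    return APN_DB.get((mcc, mnc), fallback)

print(auto_apn("208", "01"))               # known pair -> orange.m2m
print(auto_apn("999", "99", "default.apn"))  # unknown pair -> fallback
```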

2/ Enhanced mechanism to allow configuration changes:

  • Without this feature: any configuration changes are directly committed on the Base Station.

  • With this new feature: to ensure safer transition from existing to new configuration, a new mechanism is supported; it is based on apply/commit/rollback actions:

    • Apply : Applies all changes made in the network configuration menu. Since this action does not commit the modifications, the administrator must return to SUPLOG (for instance after the BS reboot) to either commit or roll back the modifications.

    • Commit (available when the modification status is APPLIED): Commits all changes made in the network configuration menu.

    • Rollback (available when the modification status is APPLIED): Restores the backup configuration (which gets the COMMITTED status).

    • Cancel : Discards all modifications in progress.

The following diagram illustrates the different modification states and their transitions:



This new mechanism prevents a Base Station from losing its connection to the ThingPark core network in case of configuration errors: if an applied configuration is not committed within a configurable delay (15 minutes by default), the Base Station automatically rolls back to the last committed configuration.
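The apply/commit/rollback lifecycle can be modelled as a small state machine. The sketch below is illustrative only (class and method names are assumptions, not product code):

```python
class ConfigSession:
    """Sketch of the SUPLOG apply/commit/rollback lifecycle. An APPLIED
    configuration that is never committed reverts to the last COMMITTED
    one (triggered on the BS by a timeout, 15 minutes by default)."""

    def __init__(self, committed):
        self.committed = committed
        self.applied = None
        self.state = "COMMITTED"

    def apply(self, new_config):
        self.applied = new_config
        self.state = "APPLIED"

    def commit(self):
        if self.state != "APPLIED":
            raise RuntimeError("commit is only available in APPLIED state")
        self.committed, self.applied = self.applied, None
        self.state = "COMMITTED"

    def rollback(self):                 # also what the timeout triggers
        if self.state != "APPLIED":
            raise RuntimeError("rollback is only available in APPLIED state")
        self.applied = None
        self.state = "COMMITTED"

    @property
    def active(self):
        """Configuration currently in effect on the Base Station."""
        return self.applied if self.state == "APPLIED" else self.committed

s = ConfigSession({"dhcp": True})
s.apply({"dhcp": False, "ip": "10.0.0.2"})
s.rollback()                            # e.g. the commit delay expired
print(s.active)                         # back to the committed config
```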

3/ The backup/restore of the Base Station configuration is enhanced by this feature, covering the LRR software binaries and scripts, the LRR configurations, and some Linux configuration (such as NTP configuration and network interface configuration).

NOTE: The backup/restore functionality supported by this feature does not cover a full system backup; it essentially covers the LRR package and configuration, allowing recovery from a buggy LRR software version or a wrong configuration.

Additionally, the SUPLOG menu is reorganized to provide users with a clearer UI.

Key Customer Benefits


This feature brings operational gains to ThingPark Base Station managers: ThingPark Operators and Network Partners/Subscribers managing Base Stations can autonomously configure network, backhaul and security parameters on their gateways via a User Interface using their Support account, without needing root access.


As explained in the previous section, this feature also brings a significant enrichment of the troubleshooting information provided to BS managers. It also provides a homogeneous framework for the Base Station backup/restore functionality, whatever the BS model.

Finally, this feature ensures safer configuration updates through the apply/commit/rollback mechanisms, minimizing the risk of losing access to the BS in case of misconfiguration.

Feature Activation

This feature is deactivated by default; it is activated via a specific flag in lrr.ini:


[suplog]
nfr920=1


This feature is only supported from LRR version 2.4.92 onwards.

Feature Limitations

The implementation of this feature has the following limitations:

  • The configuration of the /etc/hosts file and associated routes is only accessible in read-only mode from the SUPLOG and LRR CLI interfaces.

  • All logins and passwords (Prestager ID/password, Reverse SSH login/password, LRC FTP login/password, Support FTP login/password, and BS SSH login/password) are pre-defined in the BS image and cannot be updated via SUPLOG. Hence, these credentials remain defined per-operator (cannot be customized per base station).

  • Since the backup/restore mechanisms evolved with this feature, old backups taken before activating this feature cannot be used after activation. It is therefore recommended to take a new backup after activating this feature on the Base Station.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms









Comprehensive API-only mode for end-users and administrators (RDTP-10108 and RDTP-6589)

Feature Description

The scope of this enhancement is to provide a comprehensive view of API-only mode for both administrators and end-users.

  • For Administrators: in release 6.0, API-only administrators can be recognized via the “API only” flag in the Administrator details on the SMP GUI. Additionally, the “Forgot password” option is removed for such administrators since they have a permanent password.
    NOTE: API-only administrators cannot access ThingPark UI, this behavior has not changed from previous releases.
  • For End-users: An end-user can be created with a permanent password (that never expires) using SMP UI (not possible via User Portal UI). This functionality was already supported before release 6.0 but it was wrongly named “API-only”. Starting release 6.0, the related feature flag is renamed “Permanent password” in the UI.


End-users with permanent passwords can be easily recognized on SMP UI via the new flag “Permanent password” added in the end-user details in release 6.0.

NOTE: An end-user using permanent password can access User Portal UI, like classic end-users; this behavior has not changed from previous releases.

Key Customer Benefits

This feature allows easy recognition of end-users having a permanent password and of API-only administrators. It also offers a clearer user experience by avoiding misleading naming for end-users.

Feature Activation

Not applicable.

Feature Limitations

Not applicable.

Feature Interaction Matrix

LoRaWAN backend interfaces

Radio Interface

TWA User Interface

OSS API

NS-AS Interface

Billing / UDR

Alarms








OAuth V2 for end-user roles: Step-1 (RDTP-2443) and Step-2 (RDTP-6887)

Feature Description

The scope of this development is to implement the first steps of an EPIC targeting the use of standard OAuth V2 / OpenID Connect authentication flows to authenticate end-users and applications in ThingPark.

  • The OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service.

  • OpenID Connect 1.0 is a simple identity layer on top of the OAuth 2.0 protocol. It allows Clients to verify the identity of the End-User based on the authentication performed by an Authorization Server, as well as to obtain basic profile information about the End-User in an interoperable and REST-like manner.

The goal of these two steps is to migrate end-users from SMP to Keycloak and rely on Keycloak to authenticate end-users instead of the SMP API.

No functional enhancement is introduced in release 6.0 (next steps are planned in TPW release 7.0). Nevertheless, the login UI on User-Portal is slightly modified with the Keycloak integration.
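As an illustration of the standard OIDC authorization-code flow that Keycloak implements, the sketch below builds the authorization-endpoint URL a client such as User Portal would redirect the browser to. All hostnames, realm, and client names are illustrative assumptions, not the actual ThingPark configuration:

```python
# Sketch of the OAuth 2.0 / OIDC authorization-code flow entry point
# against a Keycloak realm. Endpoint path follows the standard Keycloak
# layout; realm/client/host names below are hypothetical examples.
from urllib.parse import urlencode

def authorization_url(base, realm, client_id, redirect_uri, state):
    """Build the Keycloak authorization endpoint URL (OIDC code flow)."""
    endpoint = f"{base}/realms/{realm}/protocol/openid-connect/auth"
    params = {
        "response_type": "code",    # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",  # 'openid' scope is mandatory for OIDC
        "state": state,             # opaque value for CSRF protection
    }
    return f"{endpoint}?{urlencode(params)}"

url = authorization_url(
    "https://idp.example.com/auth", "thingpark", "user-portal",
    "https://portal.example.com/callback", "xyz123")
```

After the user authenticates on the Keycloak login page, Keycloak redirects back to `redirect_uri` with a one-time code that the client exchanges for tokens at the realm's token endpoint.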



The end-user account activation window is also slightly modified: on first connection, the user is asked to update their profile using the following form:



Then, if the user has not yet defined a password, the “update password” action is presented with the following form.



For more details about end-user login changes in release 6.0, please refer to [3].

Key Customer Benefits

Steps #1 and #2 presented above are the preliminary steps of a full EPIC to integrate Keycloak with ThingPark Wireless. This EPIC has the following objectives:

  • Enhance security by using standardized authentication flow.

  • Leverage Keycloak as an identity provider to enhance security by using a dedicated / well tested authentication solution.

  • Simplify third-party application integration with ThingPark.

  • Allow authentication delegation to external identity provider (Keycloak):

    • Unified authentication on different ThingPark platforms/solutions

    • Delegate authentication for an operator.

  • True single sign-on (SSO).

Feature Activation

This feature is activated once the platform is upgraded to release 6.0.

Keycloak is deployed on a new Virtual Machine named AS_IDP in Orange zone. Keycloak High-Availability is based on a cluster composed of at least two nodes (nodes must be deployed in different failure zones).

Feature Limitations

  • All the links included in the Account Activation / Password Reset emails sent by ThingPark before the upgrade to release 6.0 will expire after the upgrade. If the activation/reset has not been executed before the platform upgrade, the user will get an error and will need to re-initiate the activation/reset request (the same error as for an expired link).

  • When the end-user or the administrator triggers a password reset action (i.e. “lost password” by the end-user or “reset password” by the administrator), the email sent is not customized for each operator (using SMP templates); the same email content is used for all operators/platforms. To customize these elements, a Keycloak subtheme must be deployed per operator.

Feature Interaction Matrix

LoRaWAN backend interfaces | Radio Interface | TWA User Interface | OSS API | NS-AS Interface | Billing / UDR | Alarms


Geolocation Features

RDTP-2200: Report GPS coordinates of the base station to LocSolver if antenna coordinates are not manually provisioned

Feature Description

The exact position of the base station / RF antennas is a crucial input to the network geolocation solver.

Prior to release 6.0: Only the antenna coordinates – manually provisioned by the Network Partner of the base station – are sent to the LocSolver and used in geolocation algorithms.

Hence, if the antenna location is not manually provisioned, the geolocation service becomes unavailable.

Starting release 6.0: With RDTP-2200, the antenna location used by the LocSolver is retrieved as follows:

  • If the antenna location is manually set by the Network Partner, it is directly used by the LocSolver. This rule assumes that the RF antenna location may be different from the base station location when RF antennas are spaced away from the BS/GPS antenna depending on the site installation setup (rooftop or tower-mounted).

  • Else (if antenna location is not available), the LocSolver uses the base station location:

    • If the BS location is manually set (i.e. administrative location), then it is used by LocSolver for all the antennas of the base station.

    • Otherwise (BS location is set to Auto (GPS) mode), then the average/filtered location reported by the GPS receiver is used by LocSolver for all the antennas of the base station.

The precedence rules are summarized in the following table:

Antenna location (manually provisioned) | BS location mode | GPS status | Antenna coordinates used by LocSolver
Yes | Any | Any | Manual antenna location
No | Manual | Any | Manual BS location
No | Auto (GPS) | UP | GPS BS location
No | Auto (GPS) | DOWN | No location
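The precedence rules above can be sketched as a simple selection function. Function and parameter names are illustrative, not the actual LRC implementation:

```python
# Minimal sketch of the antenna-location precedence rules used by
# LocSolver. Names are hypothetical; locations are (lat, lon) tuples.
def locate_antenna(manual_antenna_loc, bs_location_mode, gps_up,
                   bs_manual_loc, bs_gps_loc):
    """Return the coordinates LocSolver would use, or None if unavailable."""
    if manual_antenna_loc is not None:   # row 1: manual antenna location wins
        return manual_antenna_loc
    if bs_location_mode == "manual":     # row 2: manual (administrative) BS location
        return bs_manual_loc
    if gps_up:                           # row 3: filtered GPS position of the BS
        return bs_gps_loc
    return None                          # row 4: no location available
```

For example, `locate_antenna(None, "auto", True, None, (45.1, 5.1))` falls through to the GPS-reported base station position, matching the third row of the table.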


To leverage the GPS position reported by the base station, the LRR applies a filtering algorithm to exclude bad GPS fixes and minimize swings.

The algorithm is designed to address the following requirements:

  • Minimize fluctuations between GPS positions returned by the GPS receiver (during GPS power-up, GPS recovery following an outage, a fluctuating GPS signal, etc.) to provide a stable GPS position to the LRC/LocSolver.

  • Minimize impact on base station CPU load.

  • Minimize LRR-LRC backhaul overhead due to reporting of BS position.
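One common way to meet these requirements is a bounded history of recent fixes combined with a component-wise median, which rejects isolated bad fixes and damps swings at low CPU cost. The sketch below is illustrative only (it is not the actual LRR algorithm); the `histosize` parameter mirrors the lrr.ini setting shown under Feature Activation:

```python
# Illustrative GPS smoothing sketch (NOT the actual LRR implementation):
# keep the last `histosize` fixes and report the component-wise median.
from collections import deque
from statistics import median

class GpsFilter:
    def __init__(self, histosize=120):          # mirrors [gpsposition] histosize
        self.history = deque(maxlen=histosize)  # oldest fixes evicted automatically

    def add_fix(self, lat, lon):
        """Record a raw GPS fix and return the current filtered position."""
        self.history.append((lat, lon))
        return (median(p[0] for p in self.history),
                median(p[1] for p in self.history))

f = GpsFilter(histosize=5)
f.add_fix(48.8500, 2.3500)
f.add_fix(48.8501, 2.3501)
filtered = f.add_fix(50.0000, 3.0000)  # outlier fix is damped by the median
```

Reporting only the filtered position (rather than every raw fix) also keeps the LRR-LRC backhaul overhead low.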

Key Customer Benefits

Thanks to this feature, network geolocation becomes possible even when the Operator/Network Partner does not manually provision the antenna position of their base stations.

Hence, this feature offers enhanced operability to ThingPark Wireless Operators; it also improves the availability of the geolocation service.

Feature Activation


This feature is activated by default starting LRR version 2.4.86, via the following flags set in lrr.ini:

[lrr]
usegpsposition=1

[gpsposition]
histosize=120


Feature Limitations

If the base station coordinates reported by the GPS receiver differ significantly from the antenna coordinates manually provisioned by the Network Partner, the latter coordinates are still used by the LocSolver, and no backend alarm is raised to notify the Network Partner of a potential provisioning error in the antenna coordinates.

Feature Interaction Matrix

LoRaWAN backend interfaces | Radio Interface | TWA User Interface | OSS API | NS-AS Interface | Billing / UDR | Alarms

List of User Stories included but not validated

Not applicable.

Issues Resolved for Customer Project/Customer Support

This section lists customer issues resolved in ThingPark 6.0:

ID | Summary
RDTP-9699 | User Action log is not complete
RDTP-9447 | LRC sends NewChannelReq MAC commands with frequency=0
RDTP-9034 | [Wireless Logger] When a downlink is sent over PingSlot, the “G1” sub-band is displayed
RDTP-9309 | LRC stops forwarding uplink frames to AS when /home/actility is full
RDTP-7743 | Error 500 at login with SMP behind CDN accessed with IPv6