Version | Date | Author | Details |
---|---|---|---|
Rev1 | 2018-11-15 | Actility | First Version |
Rev2 | 2018-11-21 | Actility | Addition of the following features: RDTP-7722, RDTP-7723 and RDTP-7746 |
Rev3 | 2019-03-04 | Actility | Updated versioning (release 5.2 is renamed 5.2.2) - Addition of the following features: RDTP-9073, RDTP-8739, RDTP-5250, RDTP-7582 and RDTP-7849 - Removal of RDTP-3247/2201 (DC-OS being in restriction in release 5.2.2) |
Rev4 | 2019-03-12 | Actility | Minor editorial changes |
Rev5 | 2019-03-13 | Actility | Update LRC version following Gate-3 |
Rev6 | 2019-03-18 | Actility | Addition of RDTP-8628 and RDTP-6974 in section 6 (Issues Resolved for Customer Project / Customer Support) |
Rev7 | 2019-04-02 | Actility | - Update for RDTP-4515 Feature Activation - Remove RDTP-4762 from Section-5 (the feature is now fully validated by Actility) |
Rev8 | 2019-07-18 | Actility | Updates to section 4.5.8 (RDTP-4544): - Occurrence count of state-driven alarms is always 1. - Addition of a limitation regarding availability of historical alarms in the current release. |
Rev9 | 2020-04-10 | Actility | Added restriction to RDTP-2413 |
Rev10 | 2020-05-15 | Actility | Updated sub-component versioning to add “deprecated” components + tool-nfr330 |
Ref | Document | Author |
---|---|---|
01 | ThingPark Version 5.2.2 Integration Notes | Actility |
02 | LoRaWAN Backend Interfaces v1.0 | LoRa Alliance |
03 | ThingPark Wireless Device Manager User Guide | Actility |
04 | ThingPark Wireless Network Manager User Guide | Actility |
The scope of this document is to describe the release notes of the ThingPark Wireless Product Software release 5.2.2.
Version 5.2.2 subcomponents versioning
Version 5.2.2 list of new introduced features (User Stories) and release notes for all ThingPark software components
Issues Resolved for Customer Project/Customer Support
Additional Technical Details, Open and Resolved Issues in Version 5.2.2
The ThingPark set consists of four main key components:
ThingPark Wireless – Core network and OSS
ThingPark OS
ThingPark X
ThingPark Enterprise
The ThingPark platform is a modular solution enabling Network Operators to:
Deploy LPWANs based on LoRaWAN™ or LTE with ThingPark Wireless.
Manage, activate and monetize IoT bundles (device, connectivity and application) with ThingPark OS.
Provide value-added data layer services, such as protocol drivers and storage, with ThingPark X.
ThingPark OSS acts as the central System Management Platform (SMP), enabling all other ThingPark platform modules with base capabilities such as subscriber management, centralized authentication and access rights, and workflow management.
ThingPark Enterprise is an Internet of Things (IoT) platform that manages private LoRa® Networks. The ThingPark Enterprise edition is used by companies to support their specific business.
Figure 1 - ThingPark Solution Architecture Description High Level Product Illustration
MODULE | NAME | DESCRIPTION | VERSION |
---|---|---|---|
BILLING | billing | Online Billing | 3.4.1 |
LOCALES | smp-auth-gui-locales | Localization | 2.0.1-1 |
LOCALES | smp-billing-locales | Localization | 3.4.1 |
LOCALES | smp-dashboard-kpi-gui-locales | Localization | 2.0.5-1 |
LOCALES | smp-locales | Localization | 10.2.6 |
LOCALES | smp-portal-locales | Localization | 6.2.4 |
LOCALES | smp-thingparkstore-locales | Localization | 4.0.8 |
LOCALES | smp-tpe-gui-locales | Localization | 5.10.2-1 |
LOCALES | smp-twa-admin-locales | Localization | 9.10.3 |
LOCALES | smp-twa-locales | Localization | 10.14.4 |
LOCALES | smp-wlogger-locales | Localization | 7.2.6 |
PORTAL | portal | User Portal | 6.2.4 |
SMP | smp | System Management | 10.2.6 |
SMP | smp-drivers | SMP drivers (bpmn workflows) | 10.2.6 |
SMP | smp-drivers-tools | SMP drivers (bpmn workflows) | 1.2.16 |
SMP-CRONS | smp-crons | Batch scripts on SMP executed by the crond | 5.0.0 |
STORE | store-prestashop | Online Store Prestashop Module | 4.0.1 |
STORE | store-thingparkstore | Online Store | 4.0.16 |
STORE | store-tpspayment | Online Store Payment Module | 4.0.4 |
MODULE | NAME | DESCRIPTION | VERSION |
---|---|---|---|
CATALOGS | base-station-profiles | Base Stations Catalog | 2.1.1 |
CATALOGS | device-profiles | Devices Catalog | 4.9.0 |
CATALOGS | rf-regions | RF Regions Catalog | 1.0.5 |
DCOS | kpi-base-image | Base image embedding entrypoint, submit and scheduling scripts for KPIs | 1.3 |
DCOS | mongos | Docker image to instantiate the mongos container on the DCOS cluster | 0.5 |
DCOS | spark-executor-image | Base image used to run Spark executors, sending Graphite metrics | 1.0 |
DCOS | sqlproxy | Docker image to instantiate the maxscale container on the DCOS cluster | 0.5 |
HSM | hsm | HSM | 1.2.1.2 |
KK_BROKER | acy-kafka | Actility Kafka server package | 1.1.1 |
KK_BROKER | operator-records-routing | Kafka processor | 2.4.0 |
LRC | lrc | LoRa Network Server (including libraries) | 1.14.17 |
LRC | lrrfwfetch | LRR firmware upload on the LRC server | 1.0.5 |
LRC-PROVISIONING | lrc-binding-http | LRC HTTP provisioning interface | 2.0.2 |
LRC-SCHEMA | lrc-schema | XSD schema for RF Region validation | 1.0.2 |
LRR | lrr | Base Station | 2.4.15 |
NETWORK-SURVEY | nssa-network-survey | Network Survey | 1.11.0 |
RCA | rca-provisioning | RCA provisioning tool | 1.3.3 |
RFTOOL | rfregtool | Tool generating the packaging of the RF Region file on the LRC to prepare the provisioning of the LRR | 1.1.8 |
SHELLINABOX | shellinabox-proxy | Proxy and security for shellinabox | 2.0.1 |
SLRC | key-installer | Actility Key Installer | 2.0.0 |
SOLVER | locsolver | Location solver | 1.2.20 |
SPARK | kpi-sample | Custom KPI samples | 2.2.3 |
SPARK | twa-kpi-spark-operator | Docker image including all operator KPI applications | 3.0.1-1.3 |
SPECTRUM-ANALYSIS | nssa-spectrum-analysis | Spectrum Analysis | 2.3.0 |
SPARK | acy-azkaban-customkpi-scheduling | Custom KPI scheduling configuration | DEPRECATED |
SPARK | acy-azkaban-kpi-scheduling | Predefined KPI scheduling configuration | DEPRECATED |
SPARK | acy-azkaban-solo-server | Azkaban Solo scheduling server | DEPRECATED |
SPARK | spark-history-server | Spark History Server | DEPRECATED |
SPARK | spark-master | Spark Master | DEPRECATED |
SPARK | spark-worker | Spark Worker | DEPRECATED |
SPARK | twa-spark-kpi-dependencies-v1 | KPI application common dependencies | DEPRECATED |
SPARK | twa-spark-kpi1 :: twa-spark-kpi10 | Generate KPIs 1 to 10 and produce to Kafka | DEPRECATED |
TPO_CONF | tpo-conf-ansible | Server configuration automation | 0.2.0 |
TWA | twa | Wireless OSS (Network Manager / Device Manager) | 10.14.4 |
TWA | twa-dev | Consumption of OSS.DEV Kafka topic for MongoDB integration | 2.4.1 |
TWA | twa-ran | Consumption of OSS.LRR Kafka topic for MongoDB integration | 1.4.1 |
TWA-ADMIN | wirelessAdmin | Wireless OSS (Operator Manager / Connectivity Plan Manager) | 9.10.3 |
TWA-CRONS | twa-crons | Batch scripts on TWA executed by the crond | 6.2.1 |
TWA-DASHBOARD | twa-dashboard-ws | Dashboard KPI server | 2.2.4 |
TWA-SUPPLY-CHAIN | twasupplier | Supply Chain (application sample) | 7.0.1 |
TWA-SUPPLY-CHAIN-CRONS | twasupplier-crons | Batch scripts on Supply Chain executed by the crond | 1.0.0 |
TWA_KPI | twa-kpi-kafka | Consumption of KPIs in Kafka and import to MongoDB | 2.4. |
WLOGGER | wlogger | Wireless Logger | 7.2.6 |
MODULE | NAME | DESCRIPTION | VERSION |
---|---|---|---|
ALL | acyec | Actility PHP module to decrypt PHP files | 1.0.0 |
ALL | maxscale-tools | MaxScale tools (logrotate) | 1.0.1-1 |
ALL | tpk-extra-tools | Tool to manage configuration files | 1.4.4 |
ALL | wildfly-9.0.2 | WildFly version 9.0.2 | final12 |
AS-PROXY | nginx-as-proxy-conf | NGINX configuration package for AS-Proxy | 2.0.2 |
HTTP-PROXY | nginx-http-proxy-conf | NGINX configuration package for HTTP-proxy | 8.0.5 |
MONGO | mongo-tools | Actility MongoDB tools (logrotate) | 1.6.0 |
MONGO | twa-kpi-mongo | TWA KPI scripts for the Mongo database | 3.0.1 |
MONGO | twa-mongo | TWA scripts for the Mongo database | 10.14.4 |
SQL | acy-liquibase | Actility SQL upgrade engine based on Liquibase | 1.0.2 |
SQL | mysql-tools | MySQL tools (saveDatabase and restoreDatabase scripts, logrotate) | 1.2.2 |
SQL | smp-sql | SMP SQL scripts for the SQL database | 10.2.6 |
SQL | twa-sql | TWA SQL scripts for the SQL database | 10.14.4 |
TOOLS | tool-module-initialization | Initializes modules such as Network Manager and Application Manager | 1.2.0 |
WEBAPP | smp-auth-gui | SMP Authentication GUI | 2.0.1-1 |
WEBAPP | twa-dashboard-kpi-gui | Dashboard KPI GUI | 2.0.5-1 |
TOOLS | tool-nfr330 | Activation tool for the Address Manager module | 1.0.1 |
The LoRaWAN Backend Interfaces specification v1.0 specifies that the Join Server may support either the synchronous or the asynchronous mode when communicating with the Network Server.
Asynchronous mode means that the JS response is sent in a separate HTTP POST message, as illustrated by Figure 18 of [2]:
Synchronous mode means that the JS response is sent within the HTTP response message (with the 200 OK response), as per Figure 19 of [2]:
Before release 5.2.2, only asynchronous mode was supported by the ThingPark Network Server, meaning that it could only interface with Join Servers implementing asynchronous mode (not with a JS implementing synchronous mode).
In release 5.2.2, the ThingPark Network Server supports both modes, so it can interoperate with any Join Server implementation.
NOTE: ThingPark Join Server uses asynchronous mode.
This feature is only relevant for the following scenarios:
Home activation of OTA devices when the Join-Server function is not combined into the LRC (for instance, handled by an external third-party server).
Activation away from home of OTA roaming devices, when the fNS communicates with an external JS to retrieve the Home NetID of the roaming device (via HomeNsRequest/Ans messages).
Thanks to this feature, ThingPark Network Server becomes fully compliant with the LoRaWAN backend interface specification and can interoperate with any third-party Join Server.
To support the synchronous mode, a new attribute Transport is added to ForeignOperator (for the sNS/JS interface) and RoamingOperator (for the fNS/JS interface) to give the HTTP pattern supported by the JS:
Sample:
<ForeignOperator>
<AppEUI>A34A34A341234567</AppEUI> // AppEUI is the key (for OTAA activation)
<NetID>A34A34</NetID> <Transport>1</Transport>
<RoutingProfileJS>js-ope1</RoutingProfileJS>
</ForeignOperator>
<RoamingOperator>
<AppEUI>A34A34A341234567</AppEUI> // AppEUI is the key (for OTAA activation)
<NetID>A34A34</NetID>
<Transport>1</Transport>
<RoutingProfileJS>js-ope1</RoutingProfileJS>
</RoamingOperator>
Feature Limitations
Not applicable.
Prior to release 5.2.2, during the provisioning of OTA devices in the Device Manager or Key Manager applications, the device's root keys, that is, AppKey and NwkKey (the latter applies only to LoRaWAN 1.1 devices), were provisioned in clear text into TWA. Even though these root keys are immediately encrypted by the HSM's master key (in HSM mode) once provisioned by the user, clear-text provisioning into TWA could be deemed a security weakness.
In release 5.2.2, the subscriber can provision their devices in a fully secured way by relying on asymmetric cryptography, as per the following steps:
The subscriber exports an RSA Public Key and uses it to encrypt the device's root keys.
Device keys are provisioned in encrypted format and decrypted using the RSA Private Key by the HSM (or by TWA in non-HSM modes).
The following diagram illustrates the provisioning mode supported in HSM mode.
The user exports the HSM RSA public key (HEK[pub]) and uses it to encrypt AppKey and NwkKey.
The user creates the device by providing the RSA-encrypted AppKey and NwkKey
The HSM re-encrypts AppKey and NwkKey using its master keys (HAK/HNK)
HSM-encrypted AppKey and NwkKey are provisioned in LRC
On Join Request, the LRC provides HSM-encrypted AppKey and NwkKey to the HSM for Join procedure execution.
Benefit: ThingPark never accesses AppKey and NwkKey in clear text; they remain fully secured by the HSM.
The subscriber shall click the “Download RSA Public Key” button, then encrypt the root keys using the RSA public key (PKCS#1 v1.5 padding):
Where:
nwkKey.bin is the NwkKey in binary format
appKey.bin is the AppKey in binary format
HEK1.pem is the RSA Public Key in X.509 SubjectPublicKeyInfo/OpenSSL PEM format
encryptedNwkKey.bin is the RSA encrypted NwkKey in binary format
encryptedAppKey.bin is the RSA encrypted AppKey in binary format
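As an illustration, the encryption step can be done with OpenSSL. The sketch below generates a throwaway key pair and a dummy AppKey purely to be self-contained; in a real deployment, HEK1.pem is the public key downloaded from the GUI, appKey.bin contains the actual root key, and the first three commands are skipped:

```shell
# Illustration only: create a throwaway RSA key pair and a dummy 16-byte AppKey.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out HEK1_priv.pem
openssl rsa -in HEK1_priv.pem -pubout -out HEK1.pem
head -c 16 /dev/urandom > appKey.bin

# Encrypt the AppKey with the RSA public key using PKCS#1 v1.5 padding.
openssl pkeyutl -encrypt -pubin -inkey HEK1.pem \
  -pkeyopt rsa_padding_mode:pkcs1 \
  -in appKey.bin -out encryptedAppKey.bin
```

The same command applied to nwkKey.bin produces encryptedNwkKey.bin for LoRaWAN 1.1 devices.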
Then, the subscriber uploads the binary files of the RSA-encrypted AppKey (and NwkKey if the device is LoRaWAN 1.1).
The same approach is also applicable to device provisioning in the Key Manager (for standalone JS mode).
While this feature is primarily relevant for HSM mode, it is also extended to non-HSM modes:
SSM (Software Security Mode): In this mode, TWA replaces the HSM in the key-encryption process. Hence, the public RSA key is generated by TWA as per the following diagram. Note that SSM is only supported for the split NS-JS architecture (also known as standalone JS mode).
Legacy mode (also known as LRC combo-mode, where the join server function is embedded in the LRC) without HSM.
The user exports the TWA RSA public key (TWK[pub]) and uses it to encrypt AppKey and NwkKey.
The user creates the device by providing the RSA-encrypted AppKey and NwkKey. TWA re-encrypts AppKey and NwkKey using LRC Cluster AES Key (LCK) and provisions them in the LRC.
Benefit: AppKey and NwkKey are not entered in clear text in GUI.
For non-HSM modes, the user has the choice to provision root keys in clear-text or secure mode (both options are supported by GUI and API).
The same approach is also applicable to device provisioning in the Key Manager (for standalone JS mode).
The CSV file used for device import is enhanced for LoRaWAN OTAA devices:
Column | Definition | Cardinality |
---|---|---|
F | LoRaWAN AppKey. Supported encoding modes: | Mandatory if External Join Server option is DEACTIVATED, Forbidden otherwise. |
U | LoRaWAN NwkKey. Supported encoding modes: | Mandatory for LoRaWAN 1.1 devices if External Join Server option is DEACTIVATED, Forbidden otherwise. |
X | HSM Exchange Key (RSA) version or TWA Exchange Key (RSA) version used to encrypt AppKey/NwkKey | Optional if External Join Server option is DEACTIVATED, Forbidden otherwise. |
In Operator Manager GUI, the Operator can provision the HSM Exchange Key (RSA) version under “HSM Groups” panel, as illustrated by the following figure:
Feature Activation
Feature Limitations
Not applicable.
The aim of this feature is to enrich the security options proposed by ThingPark for the LRR-LRC backhaul interface.
Before release 5.2.2: only IPSec tunneling is supported for LRR backhaul (based on Strongswan).
Starting release 5.2.2: TLS is proposed as another VPN tunneling mode besides IPSec.
The following diagrams illustrate the LRR flows in TLS mode:
For IEC-104 flow and connection to Key Installer:
IEC-104 messages are transported inside the TLS tunnel to HAProxy, then routed by HAProxy to LRC.
X.509 certificates are configured the same way as IPSec, i.e. generated by TWA/RCA and pushed to the Key Installer where they are downloaded by the LRR.
For other flows (SFTPC and SSH):
SFTP is tunneled over TLS, terminated on the HAProxy, and routed to the LRC or SUPPORT server.
SSH remains out of the tunnel (as with IPSec).
Thanks to this feature, the Operator / Network Partner has more choices (IPSec or TLS) to implement security of LRR backhaul interface towards the core network.
It is difficult to say which of IPSec or TLS is better; this depends on the Operator's choice. The main advantage of IPSec over TLS is its ability to secure UDP traffic; however, this point is not relevant for ThingPark since LRR-LRC communication is always based on TCP.
This feature is deactivated by default.
To activate TLS on the LRR side, a specific flag must be configured in lrr.ini:
[services]
tls=1
NOTE: HAProxy is a TLS pre-requisite, so it should be deployed on SLRC before activating TLS on the LRR side.
Not applicable.
The scope of this feature is to improve the security level of the base station access to the Key Installer. The Key Installer is the ThingPark component responsible for distributing the VPN certificate tarballs to base stations.
Before release 5.2.2, access to the Key Installer was secured by IP filtering/whitelisting. This approach has some drawbacks:
It requires that the public IP addresses of all gateways be known to the network Operator to build the IP whitelist, which represents an operational overhead.
Additionally, the IP addresses of base stations using cellular backhaul as their primary interface cannot be directly filtered due to NAT.
After successful authentication, the base station is granted secure access to its personalized tarball; each base station can only retrieve its own tarball (i.e. cannot download tarballs of other base stations). Hence, even if the security of one base station is compromised due to theft (for instance), the security of other base stations is not compromised.
NOTE: This feature is only compatible with LRR-UUID identification mode.
To use this feature, the following steps are required:
For each base station, login to SUPLOG (LRR local administration GUI) to retrieve the public key of the base station. NOTE: The base station image should be configured to use the Public Key Authentication mode.
In the Network Manager application, provision the public key retrieved from SUPLOG. This can be done either at base station creation or as an update to an existing base station; it is only available if BS security (TWA parameter “defaultBsSecurity”) is configured to “IPsec or TLS (X.509 certificates)”.
Public key provisioning during base station creation:
“Disable public key authentication” flag is displayed if the operator configuration allows this mode as an option. If this checkbox is deactivated, the legacy mode (based on IP filtering and shared passwords) remains usable.
Public key provisioning for existing base station:
In base station dashboard, click “Manage Public Key” button, then fill the Public Key.
Thanks to this feature, the base station access to the Key Installer becomes more secure:
Access control is performed by a public key authentication mechanism instead of IP address filtering, which represents an operational gain.
Once authenticated, the base station can only retrieve its own certificate.
ENFORCED: Public key authentication is mandatory, a public key is required to create a base station.
OPTIONAL_ENABLED_BY_DEFAULT: Public key authentication is mandatory by default, but user can switch back to IP whitelist authentication to create a base station without providing a public key.
OPTIONAL_DISABLED_BY_DEFAULT: IP whitelist authentication is the default, but user can switch to public key authentication and provide a public key.
DISABLED (default mode): Public key authentication is disabled. A public key is never requested on base station creation (behavior prior to release 5.2.2).
Additionally, the base station image must be configured to use public key authentication to enable this feature.
The following limitations apply to this feature:
Base station’s public key is only available through SUPLOG and must be provided on base station creation. Any base station configured to use public key authentication through SUPLOG without setting its public key in TWA Network Manager will fail to connect to the Key Installer.
When an operator is configured to use both public key (new mode described by this feature) and shared password authentication (legacy method) for key installer, all operator’s tarballs are accessible using shared password, including tarballs of base stations using public key authentication.
To use public key authentication on existing base stations, they must be flashed with an image configuring this feature; this is required to have the latest version of checkvpn script compatible with this feature.
This feature is only compatible with base stations using LRR-UUID identification mode.
This feature brings several scalability improvements to ThingPark Wireless architecture:
MongoDB scalability:
- A “shard” is a single mongod instance or replica set that stores some portion of a sharded cluster’s total data set.
- “Mongos” acts as a query router, providing an interface between client applications and the sharded cluster.
- “Config Servers” store metadata and configuration settings for the cluster.
The shard key (the field(s) MongoDB uses to distribute documents across members of a sharded cluster) depends on the collection type: for base station reports, the LRR-ID is used as the shard key together with the Network Partner ID, whereas for device reports the DevEUI is used together with the Subscriber ID.
For detailed information about MongoDB sharding, please refer to https://docs.mongodb.com/manual/sharding/ .
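As a purely illustrative sketch, enabling sharding with such shard keys uses the standard MongoDB helpers sh.enableSharding and sh.shardCollection; the database, collection and field names below are hypothetical assumptions, not the actual ThingPark schema:

```javascript
// Hypothetical names: "twa" database, DeviceHistory/BstHistory collections.
sh.enableSharding("twa")
// Device reports: Subscriber ID + DevEUI as compound shard key
sh.shardCollection("twa.DeviceHistory", { subscriberId: 1, devEui: 1 })
// Base station reports: Network Partner ID + LRR-ID as compound shard key
sh.shardCollection("twa.BstHistory", { networkPartnerId: 1, lrrId: 1 })
```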
TWA (back-office applications) scalability: Enhancement of horizontal scalability by splitting the TWA architecture into several sub-components to allow independent scaling of each component and support the consumption of LRC reports via Kafka.
The following diagram shows the different components of TWAv2 architecture:
NOTE: For ThingPark Wireless, LRC reporting to OSS over HTTP is DEPRECATED and replaced by TWA-DEV and TWA-RAN functions.
- TWA-WS corresponds to the legacy TWA architecture; it implements the wireless OSS API function (via web services) and the LRC provisioning function.
- TWA-DEV processes OSS device-based reports by consuming the OSS.DEV.v1 Kafka topic; it also inserts/retrieves/updates device documents in MongoDB and triggers the appropriate device alarms.
- TWA-DEV scales with the number of OSS.DEV.v1 topic partitions: assuming the default number of partitions (50), a maximum of 50 active TWA-DEV instances can be deployed. Each TWA-DEV instance manages a dedicated pool of devices (the Kafka topic partitions use DevEUI as key).
- TWA-RAN processes OSS base station reports by consuming the OSS.RAN.v1 Kafka topic; it also inserts/retrieves/updates base station documents in MongoDB and triggers the appropriate base station alarms.
- TWA-RAN scales with the number of OSS.RAN.v1 topic partitions: assuming the default number of partitions (50), a maximum of 50 active TWA-RAN instances can be deployed. Each TWA-RAN instance manages a dedicated pool of base stations (the Kafka topic partitions use LRR-ID as key).
- TWA-SPARK and TWA-KPI are related to the KPI-dashboard framework; they have already been supported since TPW release 4.3.
LRC (core network) scalability:
This feature brings substantial scalability improvements to the ThingPark back-office architecture. It allows a fully horizontally-scalable architecture for both TWA and Mongo databases.
The split of TWA architecture into several sub-components allows independent scaling of each component according to traffic growth so that additional instances are only deployed where needed. For instance, if the bottleneck comes from device processing, only TWA-DEV instances could be multiplied without scaling-up TWA-RAN.
The use of Kafka for LRC-TWA interface is crucial to improve end-to-end scalability, by removing the HTTP protocol. It also increases the reliability of this interface thanks to the use of a reliable message queue.
The following figure shows the migration procedure to activate TWAv2 and MongoDB sharding once the network is migrated to release 5.2.2:
HTTP interface toward TWA is deactivated on LRC
The timestamp (TS) of the latest inserted MongoDB report is retrieved, i.e. the timestamp of the latest DeviceHistory document.
MongoDB sharding is activated, by following these steps:
MG_CONF replica set is deployed
mongos is deployed on AS_TWA instances
Data migration is performed
New indexes are created
Sharding is activated on all collections
TWA-DEV / TWA-RAN are started
TWA-DEV is configured to only process reports > TS – 1h
TWA-RAN is configured to only process reports > TS – 1h
NOTE: TWA-DEV / TWA-RAN implement a specific behavior during the migration process:
- TS: a configuration parameter corresponding to the timestamp of the last report already processed
- Kafka topic consumption:
- If the record timestamp \<= TS: the record is ignored
- Else: the regular processing applies
NOTE: One hour is a compromise to:
Avoid losing too many late LRC reports
Limit the amount of LRC reports processed twice
The implementation of this feature has the following limitations:
For networks migrating directly from release 5.0.1 to release 5.2.2, MongoDB sharding should be delayed by 35 days after the 5.2.2 migration to avoid losing any data history once sharding is activated.
Reports falling within the one-hour overlap window (see the note above) are processed twice; this double processing would impact the CRC-error stats, and some BS or device alarms may also be triggered twice.
NOTE: Double processing of LRC reports does not impact UDR generation: network traffic is inserted only once in MongoDB.
This feature allows ThingPark core network to support up to 50 000 base stations within the same LRC cluster.
Until release 5.1, the maximum number of base stations that can be simultaneously connected to the LRC is 30 000 base stations.
Starting release 5.2.2, this limit is pushed to 50 000 base stations to ease the large-scale deployment of indoor pico/nano gateways that are typically installed on end-customer premises.
Thanks to this feature, ThingPark Wireless becomes ready for massive deployment of indoor pico/nano gateways, facilitating the heterogeneous network deployment model in which the macro layer of outdoor gateways serves as an umbrella for the underlying small-cell (especially indoor) layer of pico/nano gateways.
Not applicable.
Not applicable.
The following configuration enhancements are supported in release 5.2.2:
This timer is used by the LRC to determine the maximum waiting time between 2 successive multicast transmission attempts: for instance, if the “Downlink maximum transmissions” is set to 3 in the multicast connectivity plan, the LRC will attempt a first transmission towards each base station (BS) included in the multicast scope and start a timer to wait for the downlink_sent_indication back from each BS. Once this timer expires without a positive response from one BS, the LRC will send the second attempt and so on unless the maximum number of attempts is reached.
Note that this timer should always be set higher than (or at least equal to) the maximum LRR retry period, which is 60 seconds for class C devices and up to 256 seconds for class B. The LRR retry period is defined as the maximum period during which the LRR autonomously retries the transmission of a downlink class B/C frame before eventually giving up (i.e. sending back a failure indication to the LRC).
Before release 5.2.2: this LRC waiting timer was set in the connectivity plan. However, there was no guarantee that this LRC setting would be aligned with the LRR's local retry timer. If the LRC timer is set lower than the LRR timer, the same downlink frame might be needlessly sent several times by the same base station, wasting radio resources and increasing pollution over the radio interface.
Starting release 5.2.2: The LRC waiting timer is removed from the multicast connectivity plan and converted to two system-wide parameters in lrc.ini:
For class B multicast groups: “MulticastRetransmitTimerClassB”, default value is 256 seconds (corresponding to 2 full beacon periods, aligning with the largest LRR retry timer value).
For class C multicast groups: “MulticastRetransmitTimerClassC”, default value is 60 seconds (aligned with the default LRR retry timer).
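As a configuration sketch using the parameter names and default values listed above (the enclosing section of lrc.ini hosting these keys is not specified in this document, so none is shown):

```ini
MulticastRetransmitTimerClassB=256   ; class B: 2 full beacon periods
MulticastRetransmitTimerClassC=60    ; class C: aligned with the default LRR retry timer
```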
Benefit: Configuration simplification, avoid any conflict between LRC timers and LRR’s local timer, optimization of downlink resources over the air.
RDTP-2215: This feature provides the multicast application server with additional stats regarding the distribution of the LRR delivery failure causes in the final multicast summary reports.
Before release 5.2.2: The multicast summary reports sent by the LRC to AS and ThingPark OSS indicate the overall success rate but don’t indicate the breakdown of transmission failures by cause.
Starting release 5.2.2: The distribution of delivery failure causes is added to the final multicast summary report.
List of LRC-computed failure causes applicable to multicast:
DA: “Duty cycle constraint detected by LRC”
DB: “Max dwell time constraint detected by the LRC”
DC: “No GPS-synchronized LRR detected by the LRC”
DD: “No LRR connected detected by the LRC”
List of LRR-computed failure causes applicable to multicast:
A0: “Radio stopped”
A1: “Downlink radio stopped”
A2: “Ping slot not available”
A3: “Radio busy”
A4: “Listen before talk”
B0: “Too late for ping slot”
D0: “Duty cycle constraint detected by LRR”
E0: “Max delay for Class C” (60 seconds)
F0: “No matching channel for multicast frequency”
On the DevEUI_multicast_summary document, the distribution of delivery failure causes is formatted as a comma-separated list of cause=occurrence-count pairs; example: A3=2, D0=7, DC=3…
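Such a distribution string is straightforward to post-process on the application side; for example, a minimal POSIX shell sketch splitting the list into per-cause counts (the sample string mirrors the example above):

```shell
summary="A3=2,D0=7,DC=3"   # distribution string from the multicast summary report

# Print one "cause -> occurrences" line per entry
echo "$summary" | tr ',' '\n' | while IFS='=' read -r cause count; do
  printf '%s -> %s occurrence(s)\n' "$cause" "$count"
done
```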
RDTP-5105: This feature allows configuring the multicast frequency and data rate directly within the multicast group without necessarily using the RF Region configuration.
Before release 5.2.2: The multicast frequency and data rate are inherited from the RF Region configuration (except for the RX2 data rate that could be forced in the multicast connectivity plan).
Starting release 5.2.2: For multicast mode implying session configuration by application-layer messages (as per the LoRaWAN multicast data-block specification), the multicast MAC transmission parameters (i.e. data rate and frequency) don’t necessarily follow the unicast settings.
Benefit: Additional flexibility to optimize the multicast configuration independently from unicast, which is particularly relevant for multicast configurations setup by the application layer and especially Actility’s Reliable Multicast (RMC) Server.
The set of multicast enhancements brought by ThingPark Wireless release 5.2.2 targets the following benefits for TPW customers:
Configuration simplification and flexibility
Optimization of the downlink resources by avoiding transmitting the same frame twice by the same gateway
Enriched delivery summary of downlink failure causes to help application servers adapt their multicast transmission pace accordingly
Ability to configure the multicast transmission parameters (data rate and frequency) differently from unicast mode, by configuring these parameters directly in the multicast group. This enhancement implies that the multicast session is set up by the application layer to directly instruct the unicast device which parameters are used by the multicast server.
RDTP-4265 (handling of multicast retransmission timer) is activated by default in release 5.2.2; it cannot be deactivated since the Connectivity Plan parameter is deprecated in this release.
RDTP-2215 (enhancement of the multicast summary report) is also activated by default in release 5.2.2.
RDTP-5105 (configuration of multicast frequency/data rate at multicast group level) is deactivated by default, i.e. the related fields are left empty after migration to release 5.2.2 or when creating a new multicast group and hence the RF Region parameters are still used unless the frequency and/or data rate are already forced in the multicast device table of the LRC (also known as table A).
To configure the multicast frequency and/or data rate at multicast group level, the corresponding fields should be filled as per the following screenshots for class B and class C devices:
If either field is left empty in the multicast group configuration, the value is taken from the RF Region configuration unless it is already forced in the multicast device table of the LRC (also known as table A).
The multicast frequency set in the multicast group must match a Logical Channel (LC) defined in the RF Region configuration, either in the RxChannels or TxChannels section (RX2Freq is out of scope). If this condition is not fulfilled, the LRC shall return an error (F0: “No matching channel for multicast frequency”) for every transmission related to this multicast group.
The scope of this feature is to support the ability to define a whitelist or a blacklist of Network Partners (also known as network providers) in the Connectivity Plan, to control which gateways (according to their Network Partner ID) can relay the LoRaWAN traffic of the devices associated with this Connectivity Plan.
The following diagram illustrates the case when the LRC filters uplink frames based on the Network Partner ID of the receiving base station, using a whitelist defined in the Connectivity Plan.
Whitelisting: When the LRC receives an uplink frame, it verifies the Network Partner ID (NP-ID) for each gateway relaying this frame and processes the uplink frames received only by those whose NP-ID is included in the whitelist defined in the connectivity plan (uplink frames relayed by other base stations are ignored by the LRC, i.e. not sent to AS/OSS; and those gateways are not eligible for downlink transmission).
Blacklisting: When the LRC receives an uplink frame, it verifies the Network Partner ID (NP-ID) for each gateway relaying this frame and processes frames reported by all gateways except those whose NP-ID is included in the blacklist defined in the connectivity plan.
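The two filtering modes described above can be sketched as follows (illustrative Python; all names and data are assumptions, not Actility code):

```python
# Select which receiving gateways the LRC keeps for an uplink frame,
# according to the Connectivity Plan's Network Partner access control mode.
def eligible_gateways(gateways, mode, np_ids):
    """gateways: list of (gateway_id, np_id) tuples that relayed the frame.
    mode: 'whitelist', 'blacklist' or 'none'; np_ids: set of NP-IDs."""
    if mode == "whitelist":
        return [g for g in gateways if g[1] in np_ids]
    if mode == "blacklist":
        return [g for g in gateways if g[1] not in np_ids]
    return list(gateways)  # 'No access control'

frame_gateways = [("gw1", "NP-A"), ("gw2", "NP-B")]
eligible_gateways(frame_gateways, "whitelist", {"NP-A"})  # keeps gw1 only
```

If the resulting list is empty, the uplink frame is ignored by the LRC (not sent to AS/OSS) and no gateway is eligible for downlink transmission.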
This feature allows ThingPark Operators and Connectivity Suppliers to fully leverage the TPW Network Partner/Provider role to split the Operator’s LoRaWAN network into several sub-networks while keeping the same NetID for all the sub-networks and fully controlling whether devices are authorized to roam between the different sub-networks of the Operator.
Examples of sub-networks within the Operator’s network are:
This feature is deactivated by default after the migration to TPW release 5.2.2, since the field “Network Partner access control” is set to “No access control” by default in the Connectivity Plan:
To activate this feature, the user should define either a whitelist or a blacklist of Network Partner IDs.
This feature does not apply to multicast.
Moreover, the maximum number of Network Partners that can be added to the whitelist/blacklist is limited to 50.
This feature allows the Operator to filter unwanted LoRa traffic directly at the LRR.
Before release 5.2.2: The LRR forwards all uplink frames having a valid physical CRC to the LRC, even those coming from devices provisioned neither on the Operator’s network nor on any of its roaming partners’ networks. This approach is sub-optimal from a backhaul consumption standpoint, especially for cellular (3G/LTE) or satellite backhaul.
Starting release 5.2.2: The Operator can configure a list of blacklisted/whitelisted DevAddr prefixes, representing respectively the banned/authorized Network IDs (NwkID, included in the DevAddr sent in every uplink frame).
Using blacklisting, the Operator can filter out the NwkID of another LoRaWAN network deployed within the same coverage area, to avoid wasting its backhaul resources routing the competitor’s traffic when there is no roaming agreement between the two Operators.
Using whitelisting, the Operator shall configure its NwkID as well as those of all its roaming partners; so that the LRR only routes the uplink frames having these authorized NwkIDs.
The NwkID whitelist/blacklist is configured at the LRC (in the Operator table; see the “Feature Activation” section for more details) and synchronized with the LRR via a specific IEC-104 message. The NwkID refresh rate is configurable in the LRR (lrr.ini file).
The minimum LRR version required for this feature is 2.4.11.
This feature allows network Operators and Network Providers to optimize their backhaul consumption by avoiding the routing of uplink frames that will later on be rejected by the LRC.
This optimization is essentially relevant for cellular and satellite backhaul modes.
This feature is deactivated by default.
NOTE: NetworkFilterVersion = 0 is reserved to instruct the LRR to delete all the NwkID filters currently loaded on the LRR, hence deactivating this feature after it has been used.
```
(...)
<NetworkFilter>-00/7,-02/7,+04/7</NetworkFilter>
<NetworkFilterVersion>1</NetworkFilterVersion>
(...)
```
Example 2: In the following example, a UL frame with NwkID 00000010 will be accepted, but one with NwkID 00000011 will be dropped.
```
(...)
<NetworkFilter>-03/8,+02/7</NetworkFilter>
<NetworkFilterVersion>2</NetworkFilterVersion>
(...)
```
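Under one plausible reading of the filter syntax (each entry as a sign, a hex value and a prefix length matched against the DevAddr most significant bits, longest prefix first), the evaluation could be sketched as below. The bit alignment and the default behavior for unmatched frames are assumptions, not the documented LRR semantics:

```python
# Hypothetical model of <NetworkFilter> evaluation (illustration only):
# each entry is +/-VALUE/NBITS; the NBITS most significant bits of the
# 32-bit DevAddr are compared to VALUE (hex); longest prefix wins.
def parse_filter(spec):
    entries = []
    for item in spec.split(","):
        item = item.strip()
        sign, body = item[0], item[1:]
        value, nbits = body.split("/")
        entries.append((sign == "+", int(value, 16), int(nbits)))
    return sorted(entries, key=lambda e: -e[2])  # most specific first

def accept(devaddr, entries):
    for keep, value, nbits in entries:
        if (devaddr >> (32 - nbits)) == value:
            return keep
    return True  # assumed default when no filter entry matches

filt = parse_filter("-03/8,+02/7")
accept(0x03000000, filt)  # top 8 bits == 0x03 -> dropped
accept(0x04000000, filt)  # top 7 bits == 0x02 -> accepted
```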
For SaaS customers not having access to the LRC, please contact the Actility Support team to configure the NwkID list.
The implementation of this feature in release 5.2.2 has the following limitations:
The NwkID whitelist/blacklist is configured directly in the LRC; provisioning of this list by TWA is not yet supported.
Since the filtering is based on NwkID (included in the DevAddr field of the LoRaWAN uplink frames), the Join Request frames (for OTA devices) are not filtered in the LRR because the DevAddr is not yet allocated when the device sends its Join Request.
The number of uplink frames dropped by the LRR due to NwkID filtering during the last aggregation period (5 minutes by default) is reported to TWA in the RFcell report, but this statistic is not yet displayed on the TWA side.
The maximum number of NwkIDs that can be whitelisted/blacklisted per Operator is 1000.
Disabling NwkID filtering by setting NetworkFilterVersion = 0 is currently in restriction. This limitation will be lifted by RDTP-14077.
This feature adds support for IPv6 addressing, in addition to the existing IPv4, for the LRR\<->SLRC interface. Only LRR IP flows are targeted by this feature; the LRC\<->AS interface is out of scope.
IPv6 addresses are 128-bit addresses written in hexadecimal, with 16-bit groups separated by colons, for instance: 3ffe:1900:4545:3:201:f8ff:fe21:66cf.
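The format can be inspected with Python's standard ipaddress module, for example:

```python
# Parse the example IPv6 address and inspect its properties.
import ipaddress

addr = ipaddress.ip_address("3ffe:1900:4545:3:201:f8ff:fe21:66cf")
addr.version        # 6
addr.max_prefixlen  # 128 bits, vs 32 for IPv4
addr.exploded       # full zero-padded form of the shortened notation
```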
The following diagram illustrates the IP flows when the LRR uses IPv6:
Although the LRC is ready to support IPv6, the current scope of this feature in release 5.2.2 is limited to the LRR\<-> SLRC interface; so the LRC remains on IPv4 in the current release.
This feature allows supporting IPv6 addressing, which has the following advantages:
Significant increase in the address pool compared to IPv4; no more private address collisions
Easier administration: no need for DHCP (thanks to the IPv6 Stateless Address Autoconfiguration feature)
Simpler header format
No more NAT (network address translation)
Cost: IPv6 is cheaper than IPv4 since IPv4 addresses are close to exhaustion.
```
[ipv6]
addglobalscope=1   ; default 0
prefix=aaaa::      ; default aaaa::
```
This feature has the following limitations in release 5.2.2:
In ThingPark release 5.2.2, IPv6 has been tested only in TLS mode (not tested with IPSec).
The fallback mode from IPv6 to IPv4 is not yet supported.
The aim of this feature is to make the default RF parameters defined by the LoRaWAN Regional Parameters specification configurable in the LRC for each ISM band profile.
Before release 5.2.2, the default settings for the following parameters (for each regional profile) are defined in the Device Profile boot settings:
MAC RX1 Delay (RX2 delay is always 1s more than RX1)
Join Delay for RX1/RX2
RX1 Data Rate Offset, RX2 Frequency, RX2 Data Rate
Beacon Frequency and Data Rate, PingSlot Frequency
Uplink / downlink dwell time (only for AS923 profile)
If these values are not filled in the Device Profile, the default settings defined by LoRaWAN are hardcoded in the LRC.
In release 5.2.2, instead of using hardcoded values, the default settings are configured in a new LRC table called “IsmbandProfile” under the /FDB_lora directory. An example of this table is given below for the EU868 profile:
```xml
<lora:IsmbandProfile xmlns:lora="http://uri.actility.com/lora">
    <lora:ID>eu868</lora:ID>
    <lora:MACRxDelay1>1000</lora:MACRxDelay1>
    <lora:MACJoinDelay1>5000</lora:MACJoinDelay1>
    <lora:RX1DRoffset>0</lora:RX1DRoffset>
    <lora:RX2DataRate>0</lora:RX2DataRate>
    <lora:RX2Freq>869.525</lora:RX2Freq>
    <lora:BeaconFrequency>869.525</lora:BeaconFrequency>
    <lora:PingSlotChannelFrequency>869.525</lora:PingSlotChannelFrequency>
    <lora:BeaconDataRate>3</lora:BeaconDataRate>
    <lora:UplinkDwellTime>0</lora:UplinkDwellTime>
    <lora:DownlinkDwellTime>0</lora:DownlinkDwellTime>
</lora:IsmbandProfile>
```
The default settings for each ISM band are compliant with the LoRaWAN Regional Parameters specification (“LoRaWAN-Regional-Parameters-v1.1rA.docx”); they are presented in the following table:
eu868 | us915 | au915 | cn470 | as923 | kr920 | cn779 | eu433 | in866 | |
---|---|---|---|---|---|---|---|---|---|
MACRxDelay1 (msec) | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 |
MACJoinDelay1 (msec) | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 | 5000 |
RX1DRoffset | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
RX2DataRate | 0 | 8 | 8 | 0 | 2 | 0 | 0 | 0 | 2 |
RX2Freq (MHz) | 869.525 | 923.3 | 923.3 | 505.3 | 923.2 | 921.9 | 786 | 434.665 | 866.55 |
BeaconFreq (MHz) | 869.525 | 0 (Freq. Hopping) | 0 (Freq. Hopping) | 0 (Freq. Hopping) | 923.4 | 923.1 | 785 | 434.665 | 866.55 |
PingSlotChannelFreq (MHz) | 869.525 | 0 (Freq. Hopping) | 0 (Freq. Hopping) | 0 (Freq. Hopping) | 923.4 | 923.1 | 785 | 434.665 | 866.55 |
BeaconDataRate | 3 (SF9) | 8 (SF12) | 10 (SF10) | 2 (SF10) | 3 (SF9) | 3 (SF9) | 3 (SF9) | 3 (SF9) | 4 (SF8) |
UplinkDwellTime | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
DownlinkDwellTime | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
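The per-band defaults above can be modeled as a simple lookup in which Device Profile boot settings, when present, take precedence (illustrative Python; only a subset of bands is shown and the names mirror the XML elements):

```python
# Subset of the ISM-band default table (illustrative, not LRC code).
ISMBAND_DEFAULTS = {
    "eu868": {"MACRxDelay1": 1000, "MACJoinDelay1": 5000, "RX1DRoffset": 0,
              "RX2DataRate": 0, "RX2Freq": 869.525, "BeaconDataRate": 3},
    "us915": {"MACRxDelay1": 1000, "MACJoinDelay1": 5000, "RX1DRoffset": 0,
              "RX2DataRate": 8, "RX2Freq": 923.3, "BeaconDataRate": 8},
}

def boot_setting(ism_band, name, device_profile=None):
    """Device Profile boot settings, when filled, override the band defaults."""
    device_profile = device_profile or {}
    if name in device_profile:
        return device_profile[name]
    return ISMBAND_DEFAULTS[ism_band][name]

boot_setting("us915", "RX2DataRate")                       # 8 (band default)
boot_setting("us915", "RX2DataRate", {"RX2DataRate": 10})  # 10 (profile override)
```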
When the device sends its first uplink frame, the LRC determines the ISM band of the device from the RF Region of the best LRR, then uses the appropriate ISM band profile to communicate with the device during the boot phase.
NOTE: The boot parameters indicated by the LoRaWAN specification are only valid for the boot phase (to establish the initial communication with the device); the network can still use different runtime settings (defined in the RF Region configuration or in the Connectivity Plan), which are notified to the device via the appropriate MAC commands.
This feature has the following benefits:
Ensure more visibility regarding profile-related boot settings without relying on hardcoded values, and ease troubleshooting of related issues.
Additionally, Device Profile creation is simplified since there is no longer any need to configure boot parameters in the Device Profile, as long as they follow the default settings defined by LoRaWAN.
The Data rate and TxPower encoding tables, as well as the downlink RX1 data rate mapping and Max Payload size tables are still hardcoded by the LRC (for each ISM band) in release 5.2.2.
In release 5.2.2, the two following keys are renamed to avoid any confusion regarding how they are used:
“LRC-AS Key” is renamed “Tunnel interface authentication key”: This key, optionally set in the Device Manager or Application Manager GUI, is used to provide an HTTP signature (via security tokens) for uplink/downlink frames exchanged between the LRC and the AS over the tunnel interface.
“AS Key” is renamed “AS Transport Key”: This key is used in HSM/SSM modes; it is configured either in the Device Manager (in case of LRC combo mode with HSM) or in the Key Manager (in case of split NS/JS mode, with either HSM or SSM security).
This feature avoids any confusion or misinterpretation of the application server security keys. The previous naming in the ThingPark GUI was confusing; the new labels are meant to be more explicit.
Not applicable.
Not applicable.
The purpose of this feature is to allow TWA to validate the correctness of the syntax of the RF Region content when the file is provisioned (created or updated) from the Operator Manager.
Before release 5.2.2: the RF Region content was accepted by TWA without verifying that the XML attributes have the right syntax, which is prone to human errors and typing mistakes.
Starting release 5.2.2: the content of the RF Region is validated before it is provisioned on the LRC. In case of syntax error, TWA returns an error to warn the user.
This feature applies to the RF Regions imported from the catalog (either automatically from Actility REPO or manually using custom catalogs) as well as the RF Region created/edited from the Operator Manager application.
Thanks to this feature, the Operator is warned if the syntax of the RF Region content is wrong. Hence, this feature brings an operational gain to ThingPark Operators.
This feature is activated by default once ThingPark is upgraded to release 5.2.2.
Since Over The Air Activation (OTAA) has become the most common and widely used activation method, as of TPW release 5.2.2, the default Device provisioning method is set to OTAA.
The feature introduces a more intuitive approach to device provisioning in ThingPark Wireless’s Device Manager application by proposing the most popular activation method as the default one.
The feature is activated by default.
Not applicable.
The table below summarizes the behavior prior to ThingPark release 5.2.2 and the enhancements introduced in release 5.2.2 as part of this feature:
Prior to release 5.2.2 | Starting release 5.2.2 |
---|---|
Each alarm has a severity state: Indeterminate, Warning, Minor, Major, Critical, Cleared. Alarms can be acknowledged (ACKed) by administrators/end users (no impact on the severity state). An alarm that is not cleared but ACKed is automatically unacknowledged if the alarm severity increases. | No change. |
For each type of alarm, a Device or Base Station can be associated with: - 0 or 1 uncleared alarm - Up to N cleared alarms A cleared alarm stays in cleared state (expires after 35 days). A new alarm is created if the same alarm is triggered again. | For each type of alarm, a Device or Base Station can be associated with 0 or 1 alarm (cleared or not cleared). A cleared alarm is recycled if the same alarm is triggered again, hence a cleared alarm never expires. |
If an alarm is not cleared, the number of occurrences is incremented each time the alarm is triggered again. | When a cleared and ACKed alarm is recycled, the number of occurrences is reinitialized. In all other cases, the number of occurrences is incremented each time the alarm is triggered again (relevant only for event-driven alarms). |
The occurrence count of several alarms is not meaningful | Alarms are categorized into state-driven or event-driven alarms: - State-driven alarms have occurrence count = 1. - Event-driven alarms have occurrence count incremented every time the alarm is raised while it is still active. The table below shows the classification of device and BS alarms into state-driven or event-driven. |
Alarms are not deleted when the associated Device or a Base Station is deleted. | Alarms are deleted when the associated Device or a Base Station is deleted. |
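The occurrence-count rules above can be modeled as follows (illustrative Python, not TWA code):

```python
# Minimal model of the 5.2.2 alarm recycling and occurrence-count rules.
class Alarm:
    def __init__(self, event_driven):
        self.event_driven = event_driven
        self.cleared = False
        self.acked = False
        self.occurrences = 0

    def trigger(self):
        if self.cleared:                 # recycle the existing alarm
            if self.acked:
                self.occurrences = 1     # cleared + ACKed -> count restarts
            elif self.event_driven:
                self.occurrences += 1
            self.cleared = self.acked = False
        elif self.occurrences == 0:
            self.occurrences = 1         # first occurrence
        elif self.event_driven:
            self.occurrences += 1        # state-driven alarms stay at 1

a = Alarm(event_driven=True)
a.trigger(); a.trigger()      # occurrences == 2
a.cleared = a.acked = True
a.trigger()                   # recycled: occurrences reset to 1
```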
Classification of device alarms:
Alarm number / name | Alarm classification |
---|---|
001: Battery level threshold | State-driven |
004: No uplink activity warning | State-driven |
005: Node uses higher data rate than expected | State-driven |
006: Node uses lower data rate than expected | State-driven |
012: MAC command transmission blocked because of rejection | State-driven |
013: MAC command transmission blocked because of no reply | State-driven |
002: Traffic exceeds the downlink regulator settings | Event-driven |
003: Traffic exceeds the uplink regulator settings | Event-driven |
007: Join request replay detected (DevNonce replay) | Event-driven |
008: Wrong MIC detected in Join request | Event-driven |
009: Join request replay detected (wrong MIC correlation) | Event-driven |
010: Uplink frame replay detected (wrong FCnt) | Event-driven |
011: Uplink frame replay detected (repeated FCnt) | Event-driven |
014: Invalid AppEUI detected in Join request | Event-driven |
015: Join request replay detected (invalid DevNonce) | Event-driven |
016: Wrong MIC detected in Uplink frame | Event-driven |
For detailed information about device alarms, please refer to [3] - ThingPark Wireless Device Manager User Guide.
Classification of base station alarms:
Alarm number / name | Alarm classification |
---|---|
102: Base station connection status | State-driven |
103: Unusually low uplink traffic level | State-driven |
104: Unusually high level of invalid uplink physical CRC | State-driven |
105: Downlink frame rate exceeds the RF cell capacity | State-driven |
106: Unusually high CPU usage level | State-driven |
107: Unusually high RAM usage level | State-driven |
108: Unusually high file system usage level | State-driven |
109: Time synchronization lost | State-driven |
110: Power failure detected | State-driven |
111: Beacon transmission failure | State-driven |
112: Abnormal log activation | State-driven |
121: Backhaul network interface status | State-driven |
101: LRR software restarted by watchdog | Event-driven |
113: Join request replay detected (DevNonce replay) | Event-driven |
114: Wrong MIC detected in Join request | Event-driven |
115: Join request replay detected (wrong MIC correlation) | Event-driven |
116: Uplink frame replay detected (wrong FCnt) | Event-driven |
117: Wrong MIC detected in Uplink frame | Event-driven |
118: Uplink frame replay detected (repeated FCnt) | Event-driven |
119: Invalid AppEUI detected in Join request | Event-driven |
120: Join request replay detected (invalid DevNonce) | Event-driven |
For detailed information about base station alarms, please refer to [4] - ThingPark Wireless Network Manager User Guide.
The following diagram describes the alarm management process prior to release 5.2.2:
The following diagram describes the alarm management process starting release 5.2.2:
This feature improves ThingPark alarm management, by optimizing the alarm storage in the database and correcting implementation issues from previous releases.
These improvements reduce end-user’s alarm management complexity by:
fixing alarm counts
deleting alarms associated with deleted objects (base stations or devices)
presenting a more comprehensive view of the number of occurrences of the same type of alarm.
This feature is activated by default.
In ThingPark Wireless release 5.2.2, the OSS API framework was introduced, enabling ThingPark users to design, build, document and consume REST APIs.
The online documentation can be accessed at the following URL: https://oss-api.thingpark.com/
The following improvements have been introduced in ThingPark OS release 5.2.2:
Documents are produced automatically.
API error codes are available.
Out-of-the-box Swagger UI is supported in this release.
Internal improvements are introduced to use this framework during the development and integration phases.
The Vendor API’s capabilities were extended by attributing different requirement levels to schema properties based on the REST operation; for instance, a schema property can be “required” in CREATE, “optional” in UPDATE and “not available” in READ.
Enable Swagger difference capabilities to handle composed schemas.
Openapi-components packaging.
Use packaged openapi-component.
Handle TWA and SMP error codes page.
Introduce a documentation builder - merge git repos.
Handle API version artifact publication.
OSS API site building - create a project that builds the OSS API archive from the artifacts of each version.
The OSS API framework enables a simplified and modern API user experience, mainly intended for development, testing, consuming REST API resources and accessing API documentation. As a dynamic API framework, it supersedes the OSS API documentation stack that was available before ThingPark v5.2.2.
To access the OSS API framework, please browse the following URL: https://oss-api.thingpark.com/
The following feature limitations are considered and are product feature candidates for improvements in the next ThingPark releases:
Model properties are ordered alphabetically rather than functionally.
Enhancements and clarifications are required on the JSON date format (epoch) vs. the XML date format (ISO).
A convenient and optimized PDF format should be produced for each API method.
API examples may include non-real values.
API testing:
Manual API testing is not supported: the HTTP document does not allow running manual tests, even when the user owns valid credentials.
Automatic API testing (e.g. for regression and progression testing) is not supported: the HTTP document does not allow running automatic tests, even when the user owns valid credentials.
In order to clearly differentiate custom RF Region or device or base station profiles created by the Operator in the Operator Manager application from those included in the Actility catalogues, starting TPW release 5.2.2, the custom profiles are by default prefixed with “CUSTOM/“.
The prefix is a system-wide configuration; it may be set to an empty string for backward compatibility. The prefix is automatically added by the Operator Manager GUI for each new RF Region, base station profile or device profile created in release 5.2.2. It is also added by the Operator Manager API; however, to avoid a backward-incompatible API change in case the requesting server does not support processing of the profile ID in the API response, the prefix can be set to “” (empty).
The feature allows an easy differentiation between custom RF Region, device or base station profiles and those included in the catalogues.
The feature is activated by default. It can be deactivated by setting the system-wide prefix to “”.
The length of the profile ID – including the prefix – must not exceed 40 characters.
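The prefixing rule and the 40-character limit can be sketched as follows (illustrative Python; the function and constant names are assumptions):

```python
# "CUSTOM/" is the default system-wide prefix; it may be set to "" for
# backward compatibility. The total ID length, prefix included, is capped.
MAX_PROFILE_ID_LEN = 40

def make_profile_id(name, prefix="CUSTOM/"):
    profile_id = prefix + name
    if len(profile_id) > MAX_PROFILE_ID_LEN:
        raise ValueError("profile ID exceeds 40 characters (prefix included)")
    return profile_id

make_profile_id("RF-Region-EU868-4ch")  # 'CUSTOM/RF-Region-EU868-4ch'
```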
This enhancement enforces the maximum number of destination URLs per Application Server route, as well as the maximum number of Application Servers per AS Routing Profile.
Before release 5.2.2:
The number of URL destinations per route is not enforced for HTTP Application Servers in Device Manager and Application Manager.
The parameter “Maximum allowed destinations per Application Server route” defined in the Connectivity Plan enforces the total number of URL destinations across all Application Servers associated with an AS Routing Profile. There was no absolute maximum applied to this parameter before release 5.2.2.
In release 5.2.2:
The maximum number of destination URLs per HTTP Application Server is enforced:
For the “sequential” strategy: a maximum of 5 URL destinations per route can be defined (modifiable by operator-wide configuration).
For the “blast” strategy: a maximum of 3 URL destinations per route can be defined (modifiable by operator-wide configuration).
IMPORTANT WARNING: Blast mode will soon be deprecated in upcoming ThingPark releases; hence it is strongly recommended to create all new application servers using the sequential mode if several URL destinations are required for reliability. If the routing strategy really requires blast mode (sending the same uplink frame to N destinations simultaneously), it is strongly recommended to configure the different destinations as distinct Application Servers and keep only one destination per AS.
The maximum number of Application Servers per AS Routing Profile is also enforced: «Maximum allowed destinations per Application Server route» is renamed in the Connectivity Plan to «Maximum allowed Application Servers»; it now controls the maximum number of Application Servers associated with the AS Routing Profile and has an absolute maximum of 5 (modifiable by operator-wide configuration).
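The enforced limits can be sketched as follows (illustrative Python with the default maxima; both limits are modifiable by operator-wide configuration and all names are assumptions):

```python
# Default maxima introduced in release 5.2.2 (illustrative model).
MAX_URLS = {"sequential": 5, "blast": 3}
MAX_AS_PER_ROUTING_PROFILE = 5

def validate_routing_profile(app_servers):
    """app_servers: list of (strategy, [urls]) per Application Server."""
    if len(app_servers) > MAX_AS_PER_ROUTING_PROFILE:
        raise ValueError("too many Application Servers in AS Routing Profile")
    for strategy, urls in app_servers:
        if len(urls) > MAX_URLS[strategy]:
            raise ValueError(f"too many destinations for {strategy} route")

validate_routing_profile([("sequential", ["https://as1", "https://as2"])])  # OK
```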
This enhancement prevents uncontrolled usage of AS declaration in the routing profile to avoid overloading the LRC.
This feature is automatically activated after migration to release 5.2.2.
Not applicable.
In release 5.2.2, the following enhancements are supported:
The LRR reports to TWA the firmware version and the FPGA version currently installed on the base station.
The base station profile catalogue is enhanced to provide, for each BS profile, the compatibility matrix between each LRR version and the corresponding firmware and FPGA versions compatible with it.
When providing the list of LRR software packages available for LRR upgrade, TWA Network Manager proposes only the LRR software packages that are compatible with the firmware and FPGA versions currently installed on the target base station, based on the compatibility matrix defined in the base station profile.
NOTE: LRR software upgrade does not support upgrading the firmware or FPGA versions of the gateway in the current release.
To support this enhancement, the base station profile is enriched with the compatibility matrix between LRR version, firmware version and FPGA version as per the following example:
NOTE: If nothing is displayed in Compatible firmware versions and/or Compatible FPGA versions, it means that the LRR software version is compatible with all firmware and FPGA versions of base stations associated with this base station profile.
Thanks to this feature, the LRR upgrade becomes safer without any risk to break the compatibility between the LRR version and the firmware/FPGA versions already installed on the base station, if the compatibility matrix is kept up-to-date on the base station profile.
This feature is automatically activated in release 5.2.2.
Upgrade of base station firmware and/or FPGA software from Network Manager is not supported in the current release.
This feature allows secure provisioning of the Cellular secret key (Ki) in Device Manager. It is not supported for HSM usage with Cellular devices. The solution relies on RSA keys (referred to as Exchange Keys):
The user exports an RSA public key and uses it to encrypt the device keys.
Device keys are provisioned encrypted, then decrypted by TWA using the RSA private key.
The following procedure is implemented:
The user exports the TWA RSA public key (TWK[pub]) and uses it to encrypt Ki
The user creates the Device by providing the RSA encrypted Ki
RSA encrypted Ki is decrypted and re-encrypted by TWA using the HSS AES Key (HSK)
AES encrypted Ki is provisioned in HSS (current implementation is used)
This feature ensures that Secret Key (Ki) is not entered in clear text in TPW GUIs/Device Manager API for security purposes.
Clear-text Cellular key provisioning is still supported in Device Manager API/GUI for backward compatibility.
This feature is activated only if cellular is activated in Operator manager.
Not applicable.
The scope of this Story is to remove the provisioning of the Ki of cellular devices from the Device Manager. The rationale is that the Subscriber should not access the Ki.
The SIM cards (IMSI + Ki) must be provisioned by the Connectivity Supplier and Ki must be stored encrypted in ThingPark Wireless database. When cellular device is provisioned in the Device Manager, the SIM card information is linked to the Device using the provided IMSI.
SIM state is also introduced to follow the SIM card lifecycle:
‘Unallocated’: SIM is not allocated to any device
‘Allocated’: SIM is associated with IMEI and provisioned in device manager
‘HSS Provisioned’: SIM is provisioned in HSS
This Story only applies to the case where the HSS provisioning is activated in Operator settings.
SIM card is pre-provisioned by the Connectivity Supplier. Ki is not provided by the Subscriber when provisioning the Device: Ki is deduced using the provided IMSI.
The following procedure is implemented:
The Connectivity Supplier provisions the SIM card by providing IMSI and Ki (clear mode).
SIM card is created in ThingPark Wireless database and linked to the Subscriber’s Operator and the Connectivity Supplier. Ki is stored encrypted using the Operator HSS Key. SIM state is ‘Unallocated’.
The Subscriber provisions the Device by providing IMEI and IMSI.
Device is created in ThingPark Wireless database and linked to the SIM card. SIM state is ‘Allocated’.
Once activated (associated with a Connectivity Plan and an AS Routing Profile), the Device and SIM card are provisioned in the HSS/SPR. SIM state is ‘HSS Provisioned’.
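The SIM card lifecycle above can be modeled as a small state machine (illustrative Python, not TWA code):

```python
# Allowed SIM state transitions, following the provisioning procedure above.
ALLOWED = {
    "Unallocated": {"Allocated"},        # Subscriber provides IMEI + IMSI
    "Allocated": {"HSS Provisioned"},    # Device activated (CP + AS Routing Profile)
    "HSS Provisioned": set(),
}

def advance(state, new_state):
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal SIM state transition: {state} -> {new_state}")
    return new_state

s = "Unallocated"
s = advance(s, "Allocated")
s = advance(s, "HSS Provisioned")
```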
This feature allows the Operator to avoid providing the secret keys (Ki) to the Subscriber, whether encrypted or in clear text.
This feature is activated only if cellular is activated in Operator manager.
Not applicable.
HSMCLIENT is a multi-threaded client that allows the HSS to communicate with a pool of HSMs, to carry out authentication requests and receive authentication responses from the HSM.
This feature allows storing the secret key and operator key in the HSM, which provides robust security since the HSM is a hardware-based security solution.
This feature is activated only if cellular is activated in Operator manager. This feature also requires the HSM to be activated in the network.
Not applicable.
This feature enables TWA support for HSM. Secured Cellular keys provisioning in HSM mode applies to Cellular provisioning in following case:
Cellular Operator uses Actility HSS and HSM
AND
SIM cards are pre-provisioned by Connectivity Supplier
The following procedure is implemented:
The Connectivity Supplier exports HSM RSA public key (HEK[pub]) of the selected HSM Group and uses it to encrypt the Ki
The Connectivity Supplier imports the SIM cards by providing IMSI , RSA encrypted Ki , HSM Group ID and HEK version
The Subscriber provisions the Device by providing IMEI and IMSI
Device is created in TWA database and linked to the SIM card
RSA encrypted Ki is decrypted and re-encrypted by HSM using the HSM Ki Key (HKI)
AES encrypted Ki is provisioned in HSS
On Authentication Info Request , the HSS provides AES encrypted OpKey and Ki to the HSM for authentication vectors generation
This feature enables TWA support for HSM for EPC Connector.
This feature is activated only if cellular is activated in Operator manager. This feature also requires the HSM to be activated in the network.
The Operator cannot restrict the list of HSM Groups available to the Connectivity Manager.
This feature enables the decoupling of PGW features when different security zones are present in the operator network. For example, the figure below has three security zones in the network.
GREEN Zone: This is the most secure zone which contains HSS/SPR as it stores SIM card database and is responsible for all the authentication
RED Zone: This zone contains elements, such as the LRC and PGW, that face the external network.
YELLOW Zone: This zone contains elements that interface between GREEN and RED Zone. These elements are PGW PROV, TWA, License server, etc. This zone acts as an intermediary between RED and GREEN zone as they are not allowed to interact with each other directly.
TWA needs to pass PGW configuration data such as fleet subscriptions, network contexts, microflow and enforcement profiles, and so on. For this purpose, the PGW PROV element holds such PGW configuration data. There is also an additional element called HSS PROXY, which allows the PGW to make SPR requests to the HSS.
Such an architecture allows splitting of different components into different security zones of the network.
This feature allows placing different components in different security zones thus improving the overall security of the solution.
This feature is activated only if cellular is activated in Operator manager.
Not applicable.
ID | Summary |
---|---|
RDTP-4573 | Provide a full interoperability between Actility JS and 3rd party vendor NS, so that an Operator would be able to mix and match different network components for his LoRaWAN network |
RDTP-6179 | Implement the ThingPark OS and ThingPark Wireless API documentation through Swagger UI tool |
This section lists customers issues that are resolved in ThingPark 5.2.2:
ID | Summary |
---|---|
RDTP-6460 | Passive Roaming: During activation away, when the sNS receives a PRStartReq with “ULFreq”:903.5, it answers with a PRStartAns with “DLFreq1”:903.5, while it should have been 926.9 with RF Region US915_8 channels attached. |
RDTP-8569 | UDR report for Network Partner contains connectivity plan from other operators |
RDTP-6661 | Import of the DeviceProfile via the Catalog is setting wrong MinorVersion in the LRC |
RDTP-9268 | nginx-http-proxy-conf-8.0.4-1.el6.noarch - Errors when executing tool-nginx-conf-generator.sh |
RDTP-8795 | ProductVersion field limitation in Device Profile |
RDTP-8265 | [DeviceManager] Sorting rule of the “Model” list box |
RDTP-7604 | [Network Manager] Base stations Alarms sorting on “Creation timestamp” |
RDTP-7490 | Order stuck in Trying state |
RDTP-7056 | Failed to update Device Profile due to invalid productName |
RDTP-6669 | Maximum LRR UUID length allowed is not consistent between Network Manager GUI and OSS-API |
RDTP-6636 | DeviceProfile - RxTimingSetupReqSupport always set to 1 |
RDTP-754 | BS Alarms number on Network Manager GUI does not show correct data counter - impact on map icon (corrected by RDTP-4544) |
RDTP-8628 | Ability to delete unwanted device profiles, base station profiles or RF Regions if they are not used by the Operator and not present in the new catalogue version imported by the Operator |
RDTP-6974 | Downlink message data should be displayed in clear format in Wireless Logger |