1. Introduction
gRPC_kafka receives JSON messages from the industrial connectors and sends them to Nexalis Cloud. Key features:
- Secure integration with Kafka using SASL_SSL or mTLS
- Averaging of numeric datapoints over a configurable interval
- Reliable logging, buffering, and error handling
- Preservation of messages with on-disk buffering during outages
2. Configuration
Configuration Files
The following two configuration files are provided as templates in the Nexalis Agent release (to be edited to suit your specific needs):
- config_gRPC_kafka.json → SASL_SSL
- config_gRPC_kafka_mTLS.json → mTLS (rename to config_gRPC_kafka.json if used)
Example: SASL_SSL
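The template shipped with the release is authoritative; the sketch below only illustrates the SASL_SSL-related parameters documented under "Configuration Parameters" below (the key grouping and all values are assumptions):

```json
{
  "Logger": {
    "MB_max_file_size": 30,
    "max_rotated_files": 3,
    "level": "info",
    "log_file_path": "./logs/gRPC_kafka.log"
  },
  "gRPCServerIP": "127.0.0.1",
  "gRPCServerPort": 50051,
  "topicName": "<topic-provided-by-Nexalis>",
  "brokers": "broker1.example.com:9092,broker2.example.com:9092",
  "ssl_ca_location": "/usr/lib/ssl/certs",
  "batchIntervalSeconds": 1,
  "compression_type": "zstd",
  "compression_level": 12
}
```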
SASL credentials must be set in environment variables:
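The exact variable names are not listed here, so the names below are placeholder assumptions; substitute the variable names supplied with your Nexalis credentials:

```shell
# Placeholder variable names and values; use the names required by your release.
export KAFKA_SASL_USERNAME='<username>'
export KAFKA_SASL_PASSWORD='<password>'
```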
Note: the zstd compression type is mandatory when using the Redpanda Kafka broker (other compression types don’t work with Redpanda).
Example: mTLS
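A sketch of the mTLS variant, showing only the sslConfig block that replaces the SASL settings (the paths and nesting are assumptions; the shipped config_gRPC_kafka_mTLS.json template is authoritative):

```json
{
  "sslConfig": {
    "caLocation": "/etc/nexalis/certs/CA_chain.pem",
    "certificateLocation": "/etc/nexalis/certs/client.crt",
    "privateKeyLocation": "/etc/nexalis/certs/client.key",
    "keyPassword": ""
  }
}
```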
Configuration Parameters
Below are the definitions for all available fields in config_gRPC_kafka.json.
Logger
- MB_max_file_size (MB): Maximum size of each log file before rotation. Default: 30.
- max_rotated_files: Maximum number of rotated log files to keep. Default: 3.
- level: Logging verbosity. Accepted values include "debug", "info", "error".
- log_file_path: Path to the log file written by gRPC_kafka.
gRPC Server
- gRPCServerIP: IP address the gRPC server binds to.
- gRPCServerPort: Port the gRPC server listens on.
Kafka Producer
- topicName: Kafka topic provided by Nexalis (account‑scoped).
- brokers: Comma‑separated list of broker endpoints.
- SASL (exclusive with mTLS):
  - ssl_ca_location: Path to CA certificates. On Debian, /usr/lib/ssl/certs is commonly used.
  - Credentials via environment variables (must be exported before start).
- sslConfig (mTLS, exclusive with SASL):
  - caLocation: Path to the CA chain file (CA_chain.pem). This file must contain two certificates: e.g., the public AWS root CA and the private Nexalis CA.
  - certificateLocation: Path to the client certificate.
  - privateKeyLocation: Path to the client private key.
  - keyPassword: Password protecting the private key (if used).
Buffer
- bufferStoragePath: Directory for on‑disk buffered messages. Default: ./persistent_buffer. Files are Zstd‑compressed.
- max_MB_buffer_size: Maximum total size (in MB) of the buffer directory.
- max_kBs_unbuffering_speed: Cap for replay speed in kB/s. Minimum: 1. Also determines batch sizing (e.g., 1000 → batches up to ~1 MB before compression).
Batching & Compression
- batchIntervalSeconds: Max time to accumulate messages before send (seconds). Min: 1; Default: 1.
- compression_type: Codec used by the Kafka producer. Default: zstd. Options: none, gzip, lz4, snappy, zstd.
  - Redpanda note: zstd is mandatory when using Redpanda.
- compression_level: Compression level for the selected codec. Ranges: gzip [0–9], lz4 [0–12], zstd [0–12], snappy = 0; -1 uses the codec default. Default: 12.
Data Handling
- dataHandler (optional) — controls forwarding behavior:
  - highFrequency: true (default): forward high‑frequency/raw messages; false: disable raw forwarding (aggregates only).
  - averagePeriod: "None": disable averages; "Xmin": compute weighted averages over X minutes, where 1 ≤ X ≤ 60 (e.g., "5min"). Default: "5min".
- message_max_bytes: Max uncompressed size of a single message sent to the cloud. Default: 10 MB; Min: 1 MB; Max: 50 MB. Applies to non‑buffered messages only.
- max_kBs_speed: Max regular (non‑buffered) send rate in kB/s. Default: 1 Gb/s (i.e., 125,000 kB/s).
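The data-handling parameters above can be combined into a fragment like the following (values are illustrative; message_max_bytes is assumed to be expressed in bytes, so 10 MB is written as 10485760):

```json
{
  "dataHandler": {
    "highFrequency": false,
    "averagePeriod": "5min"
  },
  "message_max_bytes": 10485760,
  "max_kBs_speed": 125000
}
```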
3. Average Values
- Computes averages only for numeric data types (integer, float). It does not compute averages for non-numeric types such as boolean or string.
- Includes min (the smallest value recorded during the interval), max (the largest value recorded during the interval), and count (the total number of data points used to calculate the average) in metaData.
- Uses weighted averages, accounting for the duration each data point is active, with continuity across intervals (the last value from the previous interval is carried over as the initial value for the next interval).
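The time-weighted averaging with carry-over described above can be modelled as follows (a simplified sketch, not the actual gRPC_kafka implementation; function and parameter names are invented for illustration):

```python
def weighted_average(samples, interval_start, interval_end, carry_in=None):
    """Time-weighted average over [interval_start, interval_end), timestamps in ms.

    samples: list of (timestamp_ms, value) pairs, sorted by timestamp.
    carry_in: last value of the previous interval, active from interval_start.
    Returns (average, metadata) with the min/max/count fields described above.
    """
    points = []
    if carry_in is not None:
        points.append((interval_start, carry_in))
    points.extend((max(ts, interval_start), v) for ts, v in samples)
    if not points:
        return None, {}
    total = 0.0
    for (ts, v), (next_ts, _) in zip(points, points[1:] + [(interval_end, None)]):
        total += v * (next_ts - ts)  # weight = how long this value was active
    average = total / (interval_end - points[0][0])
    values = [v for _, v in points]
    return average, {"min": min(values), "max": max(values), "count": len(samples)}


# 10 carried over for the first 30 s, then 20 for the last 30 s -> average 15.0
avg, meta = weighted_average([(30000, 20)], 0, 60000, carry_in=10)
```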
4. On-Disk Buffering
- Buffers messages on disk if the broker is unavailable, ensuring that messages are preserved even in the event of a power outage during a communication loss (preventing data loss)
- Compressed with Zstd
- Adaptive replay speed (starts slow, scales up, i.e., increases the rate of buffered message sending)
- The metaData JSON section marks buffered messages with "buffered": true. If the message was sent in real time, there is no buffered key inside the metaData section.
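For a replayed message, the metaData section therefore looks roughly like this (surrounding fields omitted; the exact layout is an assumption):

```json
{
  "metaData": {
    "buffered": true
  }
}
```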
5. Running the gRPC_kafka manually
Normally, gRPC_kafka is started automatically by the Nexalis Agent launcher and requires no manual action. For debugging purposes only, you can run it directly if config_gRPC_kafka.json is in the same directory as the executable:
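Assuming the executable carries the component name (the release layout is authoritative), a debug run looks like:

```shell
cd /path/to/nexalis-agent        # directory holding both files (placeholder path)
ls config_gRPC_kafka.json        # config must sit next to the executable
./gRPC_kafka                     # binary name assumed from the component name
```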
6. gRPC Service
Messages must be JSON arrays with the following required fields: siteName, deviceID, deviceModel, protocol, dataPoint, value, tsConnector, triggerType.
The message from the gRPC server to the gRPC client contains the result of the forwarding operation:
- success: Boolean indicating whether the forwarding was successful.
- confirmation: Confirmation message if successful.
- error_message: Error message if unsuccessful.
If the message is not correctly formatted, gRPC_kafka rejects it and sends an error to the gRPC client.
gRPC_kafka checks that all required keys have non-empty values, except for the value key, which may be empty for tag discovery purposes.
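The validation rule above can be sketched as follows (an illustrative model, not the actual gRPC_kafka code; the field list comes from this section):

```python
REQUIRED_KEYS = ("siteName", "deviceID", "deviceModel", "protocol",
                 "dataPoint", "value", "tsConnector", "triggerType")

def validate_message(msg: dict) -> list:
    """Return a list of validation errors; an empty list means the message is accepted."""
    errors = []
    for key in REQUIRED_KEYS:
        if key not in msg:
            errors.append(f"missing key: {key}")
        elif key != "value" and msg[key] in (None, ""):
            # "value" may stay empty for tag discovery; all other keys must be non-empty
            errors.append(f"empty value for key: {key}")
    return errors
```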
tsConnector is the number of milliseconds since 1970-01-01 UTC.
Example of a valid JSON message:
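A sketch of a valid message with placeholder values (the site, device model, and protocol names are illustrative assumptions):

```json
[
  {
    "siteName": "PlantA",
    "deviceID": "PLC-01",
    "deviceModel": "S7-1500",
    "protocol": "OPC-UA",
    "dataPoint": "Line1.Temperature",
    "value": 21.7,
    "tsConnector": 1718000000000,
    "triggerType": "cyclic"
  }
]
```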
7. gRPC_kafka – Common Errors and Solutions (Q&A)
Q1: SSL handshake failed
Error message: The certificates configured in the gRPC_kafka configuration file are not correct. Possible causes:
- Typo in the certificate file names.
- CA_chain.pem does not contain the required two certificates (AWS root CA + Nexalis private CA).
- Certificates are invalid and new ones must be generated.
Q2: Failed to resolve URL
Error message: The broker URLs in the gRPC_kafka configuration are not properly set.
Q3: Kafka topic authorization failed
Error message: The certificates are not valid and are not authorized to send data to the topic configured in gRPC_kafka.
Q4: Connection timeout
Error message: The Kafka producer cannot establish a connection to the specified Redpanda broker, typically due to firewall restrictions. Solution:
- Ensure all Redpanda broker URLs are provided to the network/security team for firewall rule configuration.
- The timeout interval (every 30 seconds) corresponds to socket.connection.setup.timeout.ms in librdkafka.
- Verify connectivity to the brokers using nc (netcat) or telnet to check whether the required ports are accessible.
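For scripting the same check as nc/telnet, a plain TCP connect is enough (a minimal sketch; the broker endpoint in the comment is a placeholder):

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Equivalent of `nc -vz host port`: attempt a plain TCP connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder broker endpoint):
# port_reachable("broker1.example.com", 9092)
```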