Add data to your client

Data is added to the FLNet Client through connectors. A connector is a saved configuration that describes the whole ETL (Extract, Transform, Load) process for a specific data input:

  • The data format (csv/sql database/...)
  • The extraction method (e.g. PostgreSQL extractor logic or a simple csv extractor)
  • The set of column names that should exist after the extraction
  • A set of transformation apps/functions that are applied to a subset of columns
  • Mapping of the (transformed) columns to the nodes in the data standard

Once a connector is set up, it can be run on demand to import or re-import data into a cohort.
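Conceptually, a connector bundles all of the pieces above into one saved configuration. The sketch below is purely illustrative; the field and column names are assumptions for this example, not the actual FLNet configuration schema:

```python
# Hypothetical sketch of what a connector configuration captures.
# Keys and values are illustrative only, not the real FLNet format.
connector = {
    "name": "lab_results_csv",
    "format": "csv",                      # the data format
    "extractor": "csv",                   # the extraction method
    "expected_columns": [                 # columns expected after extraction
        "patient_id", "weight_lb", "visit_date",
    ],
    "transformations": [                  # apps applied to a subset of columns
        {"app": "pounds_to_kg", "columns": ["weight_lb"]},
    ],
    "mapping": {                          # (transformed) columns -> data standard nodes
        "patient_id": "patient.id",
        "weight_lb": "patient.weight_kg",
    },
}
```

Once saved, such a configuration is what gets re-run on demand to import or re-import data.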

The client supports creating connectors based on data files. A data file, e.g. a CSV file, is uploaded directly to the client, and a connector is configured for it once. This connector can then be run again later with another file, as long as the new file has the same format, including column names, as the one used during connector creation.

We natively support simple tabular CSV/TSV or similar files as well as .xlsx Excel files. Multiple sheets or multiple tabular files are also supported.

File Import Settings

The first step is selecting the file to import and specifying how it should be read. The file is uploaded to the client and stored server-side, associated with the cohort. Different settings are displayed depending on the selected file type (Excel, CSV, or JSON). For more information, please read the relevant connector documentation:

Alternatively, an extraction app can be used, e.g. for importing from a database or other, more complex data inputs.

Specify Headers

After the file is read, the detected columns are listed. This step allows reviewing and adjusting column names before mapping:

  • Rename a column to a more meaningful name. Renamed columns carry forward into the mapper and into future re-uploads.
  • When Has header row is disabled, columns are assigned numeric names (0, 1, 2, ...) which can be renamed here.
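The numeric naming and renaming behavior can be illustrated with a minimal sketch (plain Python, not FLNet code; the column names and values are made up):

```python
import csv
import io

# A file without a header row: columns initially get numeric names
# "0", "1", "2", ... which the user can then rename in this step.
raw = "4711,80.5,2024-01-15\n4712,67.2,2024-01-16\n"
rows = list(csv.reader(io.StringIO(raw)))

columns = [str(i) for i in range(len(rows[0]))]   # auto-assigned: ["0", "1", "2"]
renames = {"0": "patient_id", "1": "weight_lb", "2": "visit_date"}  # user renames
columns = [renames.get(c, c) for c in columns]

records = [dict(zip(columns, row)) for row in rows]
print(records[0])  # {'patient_id': '4711', 'weight_lb': '80.5', 'visit_date': '2024-01-15'}
```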

Transformation

Optionally, transformation apps can be selected, e.g. a conversion from pounds to kg.
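As a rough idea of what such a transformation does to a column's values, here is a minimal pounds-to-kg function. This is only an illustration in the spirit of the example above, not the API of the actual transformation apps:

```python
# Illustrative conversion; FLNet transformation apps are configured in
# the client UI, and this function is not their real interface.
LB_TO_KG = 0.45359237  # international avoirdupois pound in kilograms

def pounds_to_kg(value: float) -> float:
    """Convert a weight in pounds to kilograms, rounded to 2 decimals."""
    return round(value * LB_TO_KG, 2)

print(pounds_to_kg(165))  # 74.84
```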

Mapper

The mapper step assigns each source column to a field in the data standard. The mapper presents a table with all columns and a sample value from the first row of the file.

For each column, the corresponding field in the schema tree is selected using the Map to Field option. When mapping, a preview of the normalization and validation that will later be applied is shown for one sample value.
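The result of this step can be thought of as a column-to-field assignment plus a per-value preview. The sketch below is hypothetical; the field paths and sample values are invented for illustration:

```python
# Hypothetical result of the mapper step: each source column is assigned
# a path in the data standard (paths here are made up).
mapping = {
    "patient_id": "patient.id",
    "weight_kg": "patient.measurements.weight",
}

# One sample row from the file, as shown in the mapper table.
sample_row = {"patient_id": "4711", "weight_kg": "74.84"}

# Preview: pair each sample value with its target field.
preview = {mapping[col]: val for col, val in sample_row.items()}
print(preview)  # {'patient.id': '4711', 'patient.measurements.weight': '74.84'}
```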

Timeseries mapping

When a column contains multiple measurements per patient recorded over time (e.g. serial lab results), it should be mapped as a timeseries.

A timeseries mapping tells the importer that values in that column are time-anchored, so each measurement is stored individually with its point in time rather than overwriting a single value per patient.

To configure a timeseries mapping, use the Map to Visit/Time option for the relevant column. A dialog will ask for:

  • Visit - The column that identifies a visit or session. Measurements sharing the same visit ID are grouped together. Useful when multiple readings belong to the same clinical encounter.
  • Time - The column in the file that contains the date or datetime of each measurement. Used to order records chronologically and align series across patients.
  • Timestamp format - The format of the timestamp values, e.g. yyyy-MM-dd HH:mm:ss or epoch. A list of common formats is provided.

Columns designated as timestamp or visit ID carriers are automatically marked as Timeseries Column in the mapper and cannot themselves be mapped to schema fields, since their role is to provide temporal context rather than clinical values.
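The grouping and ordering that a timeseries mapping implies can be sketched as follows. This is illustrative only (the column names and measurements are made up, and `%Y-%m-%d %H:%M:%S` is the Python equivalent of the `yyyy-MM-dd HH:mm:ss` pattern):

```python
from collections import defaultdict
from datetime import datetime

# Made-up rows: one patient file with repeated glucose measurements,
# anchored by a visit ID and a timestamp column.
rows = [
    {"visit": "V1", "time": "2024-01-15 08:30:00", "glucose": "5.4"},
    {"visit": "V1", "time": "2024-01-15 12:00:00", "glucose": "6.1"},
    {"visit": "V2", "time": "2024-02-20 09:15:00", "glucose": "5.0"},
]

FMT = "%Y-%m-%d %H:%M:%S"  # corresponds to "yyyy-MM-dd HH:mm:ss"

# Group measurements by visit ID, then order each group chronologically,
# so every value keeps its own point in time instead of being overwritten.
series = defaultdict(list)
for row in rows:
    series[row["visit"]].append((datetime.strptime(row["time"], FMT),
                                 float(row["glucose"])))
for visit in series:
    series[visit].sort()

print(len(series["V1"]))  # 2
```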

Save / Save and Run

Once the mapping is complete, the connector can be saved or saved and run immediately.

  • Save - Stores the connector configuration without importing data. The connector can be run manually at any time from the connector overview.
  • Save and Run - Saves the configuration and displays the screen from which the import operation can be started to load data into the cohort.

Re-uploading

When the source data is updated, a new version of the file can be uploaded through the connector without reconfiguring the mapping. The client validates that the column structure of the new file matches the existing configuration before accepting it. If columns do not match, the upload is rejected and the existing data is preserved.
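The structural check described here amounts to comparing the new file's header against the connector's expected columns. A minimal sketch (not FLNet code; names are illustrative):

```python
# Illustrative re-upload validation: a new file is accepted only if its
# column structure matches the connector's existing configuration.
def columns_match(expected: list[str], new_header: list[str]) -> bool:
    """Return True if the new file's header matches the saved configuration."""
    return expected == new_header

expected = ["patient_id", "weight_lb", "visit_date"]

print(columns_match(expected, ["patient_id", "weight_lb", "visit_date"]))  # True
print(columns_match(expected, ["patient_id", "weight"]))                   # False
```

A real implementation might additionally report which columns are missing or unexpected, but the accept/reject decision is the part that matters here: on mismatch the upload is rejected and the existing data is preserved.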

Normalization and validation

For more information about how data is normalized and validated, please check: