1. PKanalix documentation
Version 2021
This documentation is for PKanalix.
©Lixoft
PKanalix performs analyses of PK data sets, including:
 Non-compartmental analysis (NCA) – computation of the NCA parameters, based on the calculation of \(\lambda_z\), the slope of the terminal elimination phase.
 Bioequivalence analysis (BE) – comparison of the NCA parameters of several drug products (usually two, called ‘test’ and ‘reference’) using the average bioequivalence approach.
 Compartmental analysis (CA) – estimation, for each individual, of the parameters of a model representing the PK as dynamics in compartments. It does not include the population analysis performed in Monolix.
What else?
 A clear user interface with an intuitive workflow to efficiently run the NCA, BE and CA analyses.
 An easily accessible PK model library and an auto-initialization method to improve the convergence of the optimization of the CA parameters.
 An integrated bioequivalence module to simplify the analysis.
 Automatically generated results and plots that give immediate feedback.
 Interconnection with the MonolixSuite applications, e.g. to export projects to Monolix for population analysis.
PKanalix tasks
PKanalix uses the dataset format common to all MonolixSuite applications (see here for more details). This allows you to move a project between applications, for instance to export a CA project to Monolix and perform a population analysis with a single click.
Non Compartmental Analysis
The first main feature of PKanalix is the calculation of parameters in the non-compartmental analysis framework. This task consists in defining rules for the calculation of \(\lambda_z\) (the slope of the terminal elimination phase) and computing all the NCA parameters. The rules can be defined either globally, via general settings, or manually for each individual – using the interactive plots, the user selects or removes points from the \(\lambda_z\) calculation.
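As an illustration of the underlying idea, \(\lambda_z\) can be obtained by an unweighted log-linear regression on the terminal points. The sketch below is only that – an illustration with hypothetical values, not PKanalix's actual implementation, whose point-selection rules and settings are richer:

```python
import math

# Illustrative sketch of the lambda_z calculation: an unweighted
# log-linear regression of log(concentration) vs. time on the
# terminal points (all concentrations must be > 0).
def lambda_z(times, concs):
    logs = [math.log(c) for c in concs]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    lz = -slope                   # lambda_z is minus the terminal slope
    return lz, math.log(2) / lz   # terminal half-life t1/2 = ln(2)/lambda_z

# Hypothetical mono-exponential terminal phase: C(t) = 100*exp(-0.1*t)
t = [12.0, 16.0, 20.0, 24.0]
c = [100 * math.exp(-0.1 * ti) for ti in t]
lz, thalf = lambda_z(t, c)   # lz ≈ 0.1, thalf ≈ 6.93
```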
Bioequivalence
The NCA parameters obtained for different groups (e.g. a test and a reference formulation) can be compared using the bioequivalence task. The linear model definition contains one or several fixed effects selected in an integrated module. The task produces a confidence interval that is compared to the predefined BE limits and automatically displayed in intuitive tables and plots.
Compartmental Analysis
The second main feature of PKanalix is the calculation of parameters in the compartmental analysis framework. It consists in finding the parameters of a model that represents the PK as dynamics in compartments for each individual. This task defines a structural model (based on a user-friendly PK model library) and estimates the parameters for all individuals. An automatic initialization method improves the convergence of the parameter estimation for each individual.
All the NCA, BE and/or CA outputs are automatically displayed in sortable tables in the Results tab. They are also exported to the results folder in an R-compatible format. Interactive plots give immediate feedback and help interpret the results.
PKanalix can be used not only via the user interface, but also from R with a dedicated R package (detailed here). All the actions performed in the interface have equivalent R functions, which is particularly convenient for reproducibility purposes or batch jobs.
The results of the NCA and bioequivalence calculations have been compared, on an extensive number of datasets, to the results of WinNonlin and to published results obtained with SAS. All results were identical. See the poster below for more details.
2. Data format for NCA and CA analysis
The data set format used in PKanalix is the same as for the entire MonolixSuite, to allow smooth transitions between applications. The dosing information and the concentration measurements are recorded in a single data set. The dose information must be indicated for each individual, even if it is identical for all individuals.
A data set typically contains the following columns: ID, TIME, OBSERVATION, AMOUNT. For IV infusion data, an additional INFUSION RATE or INFUSION DURATION column is necessary. For steady-state data, STEADY STATE and INTERDOSE INTERVAL columns are added. Cells that do not contain information (e.g. the AMOUNT column on a line recording a measurement) contain a dot. If a dose and a measurement occur at the same time, they can be encoded on the same line or on different lines. Sort and carry variables can be defined using the OCCASION, CATEGORICAL COVARIATE and CONTINUOUS COVARIATE columntypes. BLQ data are defined using the CENSORING and LIMIT columntypes.
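For illustration, a minimal single-dose data set with these columns could look like the following (hypothetical values; the headers are free):

```
ID,TIME,AMT,CONC
1,0,150,.
1,1,.,4.15
1,4,.,6.20
1,12,.,1.93
2,0,150,.
2,1,.,3.88
```

Each dose line carries a dot in the observation column, and each measurement line carries a dot in the amount column.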
Headers are free but the columns must be assigned to one of the available columntypes. The full list of columntypes is available at the end of this page and a detailed description is given on the dataset documentation website.
Units are not recorded in PKanalix and it is not possible to convert the original data to adapt the units. It is the user’s responsibility to ensure that units are consistent in the data set (for instance, if the amount is in mg and the concentrations in mg/L, then the calculated volumes will be in L). Similarly, the indicated amount must be an absolute value. It is not possible to enter a dose in mg/kg and let the software calculate the dose in mg based on the weight. This calculation must be done outside PKanalix.
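This pre-processing step is a simple multiplication to perform before importing the data; the values below are hypothetical:

```python
# PKanalix expects absolute amounts in the AMOUNT column, so a
# weight-based dose must be converted beforehand (hypothetical values).
dose_per_kg = 2.5        # mg/kg, the prescribed dose
weight_kg = 72.0         # kg, e.g. taken from a WEIGHT covariate column
amount_mg = dose_per_kg * weight_kg
# amount_mg = 180.0 -> the value to put in the AMOUNT column
```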
Note that the NCA application does not allow several observations at the same time for the same individual in the dataset.
Below we show how to encode the data set for the most typical situations.
 Plasma concentration data
 Steadystate data
 BLQ data
 Urine data
 Occasions (“Sort” variables)
 Covariates (“Carry” variables)
 Other useful columntypes
 Overview of all columntypes
 Excel macro to adapt data format for PKanalix
 Filtering data
 Units
Plasma concentration data
Extravascular
For extravascular administration, the mandatory columntypes are ID (individual identifiers as integers or strings), OBSERVATION (measured concentrations), AMOUNT (dose amount administered) and TIME (time of dose or measurement).
To distinguish the extravascular from the IV bolus case, in “Tasks>Run” the administration type must be set to “extravascular”.
If no measurement is recorded at the time of the dose, a concentration of zero is added at the dose time for single-dose data; for steady-state data, the minimum concentration observed during the dose interval is added instead.
Example:
 demo project_extravascular.pkx
This data set records the drug concentration measured after single oral administration of 150 mg of drug in 20 patients. For each individual, the first line records the dose (in the “Amount” column tagged as AMOUNT columntype) while the following lines record the measured concentrations (in the “Conc” column tagged as OBSERVATION). Cells of the “Amount” column on measurement lines contain a dot, and vice versa for the “Conc” column on dose lines. The column containing the times of measurements or doses is tagged as TIME columntype and the subject identifiers, which we will use as sort variable, are tagged as ID. Check the OCCASION section if more sort variables are needed. After accepting the dataset, the data is automatically assigned as “Plasma”.
In the “Tasks/Run” tab, the user must indicate that this is extravascular data. In the “Check lambda_z”, on a linear scale for the y-axis, measurements originally present in the data are shown with full circles. Added data points, such as a zero concentration at the dose time, are represented with empty circles. Points included in the \(\lambda_z\) calculation are highlighted in blue.
After running the NCA analysis, PK parameters relevant to extravascular administration are displayed in the “Results” tab.
IV infusion
Intravenous infusions are indicated in the data set via the presence of an INFUSION RATE or INFUSION DURATION columntype, in addition to the ID (individual identifiers as integers or strings), OBSERVATION (measured concentrations), AMOUNT (dose amount administered) and TIME (time of dose or measurement). The infusion duration (or rate) can be identical or different between individuals.
In “Tasks>Run” the administration type must be set to “intravenous”.
If no measurement is recorded at the time of the dose, a concentration of zero is added at the dose time for single-dose data; for steady-state data, the minimum concentration observed during the dose interval is added instead.
Example:
 demo project_ivinfusion.pkx:
In this example, the patients receive an iv infusion over 3 hours. The infusion duration is recorded in the column called “TINF” in this example, and tagged as INFUSION DURATION.
In the “Tasks/Run” tab, the user must indicate that this is intravenous data.
IV bolus
For IV bolus administration, the mandatory columntypes are ID (individual identifiers as integers or strings), OBSERVATION (measured concentrations), AMOUNT (dose amount administered) and TIME (time of dose or measurement).
To distinguish the IV bolus from the extravascular case, in “Tasks>Run” the administration type must be set to “intravenous”.
If no measurement is recorded at the time of the dose, the concentration at time zero is extrapolated using a log-linear regression of the first two data points, or is taken to be the first observed measurement if the regression yields a slope >= 0. See the calculation details for more information.
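The rule above can be sketched as follows, assuming the dose is given at time 0. This is an illustration of the stated rule, not PKanalix's actual implementation:

```python
import math

def c0_bolus(t1, c1, t2, c2):
    """Back-extrapolate C0 at the dose time (assumed t = 0) from the
    first two observations: log-linear regression through the two points,
    falling back to the first observation if the slope is >= 0."""
    slope = (math.log(c2) - math.log(c1)) / (t2 - t1)
    if slope >= 0:
        return c1
    return math.exp(math.log(c1) - slope * t1)

# Decaying profile C(t) = 50*exp(-0.2*t): extrapolation recovers C0 = 50
c0 = c0_bolus(0.5, 50 * math.exp(-0.1), 1.0, 50 * math.exp(-0.2))
# Increasing first two points: fall back to the first observation
c0_inc = c0_bolus(1.0, 10.0, 2.0, 12.0)   # -> 10.0
```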
Example:
 demo project_ivbolus.pkx:
In this data set, 25 individuals have received an iv bolus and their plasma concentrations have been recorded over 12 hours. For each individual (indicated in the column “Subj” tagged as ID columntype), we record the dose amount in a column “Dose”, tagged as AMOUNT columntype. The measured concentrations are tagged as OBSERVATION and the times as TIME. Check the OCCASION section if more sort variables are needed in addition to ID. After accepting the dataset, the data is automatically assigned as “plasma”.
In the “Tasks/Run” tab, the user must indicate that this is intravenous data. In the “Check lambda_z”, measurements originally present in the data are shown with full circles. Added data points, such as the C0 at the dose time, are represented with empty circles. Points included in the \(\lambda_z\) calculation are highlighted in blue.
After running the NCA analysis, PK parameters relevant to iv bolus administration are displayed in the “Results” tab.
Steady-state
Steady-state must be indicated in the data set by using the STEADY STATE columntype:
 “1” indicates that the individual is already at steady-state when receiving the dose. This implicitly assumes that the individual has received many doses before this one.
 “0” or “.” indicates a single dose.
The dosing interval (also called tau) is indicated in the INTERDOSE INTERVAL column, on the lines defining the doses.
Steady-state calculation formulas are applied for individuals having a dose with STEADY STATE = 1. A data set can contain both individuals that are at steady-state and individuals that are not.
If no measurement is recorded at the time of the dose, the minimum concentration observed during the dose interval is added at the time of the dose for extravascular and infusion data. For IV bolus data, a regression using the first two data points is performed. Only measurements between the dose time and dose time + interdose interval are used.
Examples:
 demo project_steadystate.pkx:
In this example, the individuals are already at steady-state when they receive the dose. This is indicated in the data set via the column “SteadyState” tagged as STEADY STATE columntype, which contains a “1” on lines recording doses. The interdose interval is noted on those same lines in the column “tau” tagged as INTERDOSE INTERVAL. When accepting the data set, a “Settings” section appears, which allows the user to define the number of steady-state doses. This information is relevant when exporting to Monolix, but is not used in PKanalix directly.
After running the NCA estimation task, steadystate specific parameters are displayed in the “Results” tab.
BLQ data
Below the limit of quantification (BLQ) data can be recorded in the data set using the CENSORING column:
 “0” indicates that the value in the OBSERVATION column is the measurement.
 “1” indicates that the observation is BLQ.
The lower limit of quantification (LOQ) must be indicated in the OBSERVATION column when CENSORING = “1”. Note that strings are not allowed in the OBSERVATION column (except dots). A different LOQ value can be used for each BLQ data point.
When performing an NCA analysis, the BLQ data before and after the Tmax are distinguished. They can be replaced by:
 zero
 the LOQ value
 the LOQ value divided by 2
 or considered as missing
For a CA analysis, the same options are available, but no distinction is made between before and after Tmax. Once replaced, the BLQ data are handled like any other observation.
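The replacement options listed above can be summarized in a small sketch. This is an illustration only; in PKanalix these choices are settings of the NCA and CA tasks:

```python
def replace_blq(loq, method):
    """Return the value substituted for one BLQ observation.
    method: 'zero', 'LOQ', 'LOQ/2' or 'missing' (None = drop the point)."""
    rules = {"zero": 0.0, "LOQ": loq, "LOQ/2": loq / 2, "missing": None}
    return rules[method]

# NCA defaults: zero before Tmax, LOQ/2 after Tmax (here LOQ = 1.8)
before_tmax = replace_blq(1.8, "zero")    # -> 0.0
after_tmax = replace_blq(1.8, "LOQ/2")    # -> 0.9
```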
A LIMIT column can be added to record the other limit of the censoring interval (in general zero). This value is not used by PKanalix but can facilitate the transition from an NCA/CA analysis in PKanalix to a population model in Monolix.
Note: the proper encoding of BLQ data can easily be done using Excel or R.
With R, the “CENSORING” column can be added to an existing “data” data frame using data$CENS <- ifelse(data$CONC=="BQL", 1, 0), for the case where the observation column is called CONC and BLQ data are originally recorded in this column with the string “BQL”. Then replace the “BQL” string by the LOQ value (here 0.2 for instance): data$CONC <- ifelse(data$CONC=="BQL", 0.2, data$CONC).
With Excel, type for instance =IF(E2="BQL",1,0)
(assuming the column containing the observations is E) to generate the first value of the “CENSORING” column and then propagate to the entire column. Finally, replace the “BQL” string by the LOQ value.
Examples:
 demo project_censoring.pkx: two studies with BLQ data with two different LOQ
In this dataset, the measurements of two different studies (indicated in the STUDY column, tagged as CATEGORICAL COVARIATE in order to be carried over) are recorded. For the study 101, the LOQ is 1.8 ug/mL, while it is 1 ug/mL for study 102. The BLQ data are marked with a “1” in the BLQ column, which is tagged as CENSORING. The LOQ values are indicated for each BLQ in the CONC column of measurements, tagged as OBSERVATION.
In the “Task>NCA>Run” tab, the user can choose how to handle the BLQ data. For BLQ data before and after Tmax, the BLQ data can be considered as missing (as if this row of the data set did not exist), or replaced by zero (default before Tmax), the LOQ value, or the LOQ value divided by 2 (default after Tmax). In the “Check lambda_z” tab, the BLQ data are shown in green and displayed according to the replacement value.
For the CA analysis, the replacement value for all BLQ can be chosen in the settings of the “Run” tab (default is Missing). In the “Check init.” tab, the BLQ are again displayed in green, at the LOQ value (irrespective of the chosen method for the calculations).
Urine data
To work with urine data, it is necessary to record the time and amount administered, the volume of urine collected for each time interval, the start and end time of the intervals and the drug concentration in a urine sample of each interval. The time intervals must be continuous (no gaps allowed).
In PKanalix, the start and end times of the intervals are recorded in a single column, tagged as TIME columntype. In this way, the end time of an interval automatically acts as the start time of the next interval. The concentrations are recorded in the OBSERVATION column. The volume column must be tagged as REGRESSOR columntype. This general columntype of MonolixSuite data sets allows an easy transition to the other applications of the Suite. As several REGRESSOR columns are allowed, the user can select which REGRESSOR column should be used as volume. The concentration and volume measured for the interval [t1,t2] are noted on the t2 line. The volume value on the dose line is meaningless, but it cannot be a dot. We thus recommend setting it to zero.
A typical urine data set has the following structure. A dose of 150 ng has been administered at time 0. The first sampling interval spans from the dose at time 0 to 4 h post-dose. During this time, 410 mL of urine have been collected. In this sample, the drug concentration is 112 ng/mL. The second interval spans from 4 h to 8 h, the collected urine volume is 280 mL and its concentration is 92 ng/mL. The third interval is marked on the figure: 390 mL of urine have been collected from 8 h to 12 h.
The given data are used to calculate the interval midpoints and the excretion rate of each interval. This information is then used to calculate \(\lambda_z\) and the urine-specific parameters. In “Tasks/Check lambda_z”, we display the midpoints and excretion rates. However, in “Plots>Data viewer”, we display the measured concentrations at the end time of each interval.
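These computations can be illustrated with the example values above. The concentration of the third interval is not given in the text, so a value of 70 ng/mL is assumed here purely for illustration:

```python
# Midpoints and excretion rates for the urine example above.
# The third concentration (70 ng/mL) is an assumed value for illustration.
times = [0.0, 4.0, 8.0, 12.0]    # interval boundaries (h)
volumes = [410.0, 280.0, 390.0]  # collected volume per interval (mL)
concs = [112.0, 92.0, 70.0]      # concentration per interval (ng/mL)

intervals = list(zip(times, times[1:]))
midpoints = [(t1 + t2) / 2 for t1, t2 in intervals]   # [2, 6, 10] h
amounts = [v * c for v, c in zip(volumes, concs)]     # ng excreted per interval
rates = [a / (t2 - t1) for a, (t1, t2) in zip(amounts, intervals)]
# first excretion rate: 410 * 112 / 4 = 11480 ng/h, plotted at t = 2 h
```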
Example:
 demo project_urine.pkx: urine PK dataset
In this urine PK data set, we record the consecutive time intervals in the “TIME” column tagged as TIME. The collected volumes and measured concentrations are in the columns “VOL” and “CONC”, respectively tagged as REGRESSOR and OBSERVATION. Note that the volume and concentration are indicated on the line of the interval end time. The volume on the first line (start time of the first interval, as well as dose line) is set to zero as it must be a double. This value is not used in the calculations. Once the dataset is accepted, the observation type must be set to “urine” and the REGRESSOR column corresponding to the volume must be selected.
In “Tasks>Check lambda_z”, the excretion rates are plotted at the interval midpoints for each individual. The selection of points for the \(\lambda_z\) calculation works as usual.
Once the NCA task has run, urine-specific PK parameters are displayed in the “Results” tab.
Occasions (“Sort” variables)
The main sort level is the individual, indicated in the ID column. Additional sort levels can be encoded using one or several OCCASION column(s). OCCASION columns contain integer values that distinguish different time periods for a given individual. The time values can restart at zero or continue when switching from one occasion to the next. Variables differing between periods, such as the treatment for a crossover study, are tagged as CATEGORICAL or CONTINUOUS COVARIATES (see below). The NCA and CA calculations are performed on each ID-OCCASION combination. Each occasion is considered independent of the others (i.e. a washout is applied between occasions).
Note: occasions columns encoding the sort variables as integers can easily be added to an existing data set using Excel or R.
With R, the “OCC” column can be added to an existing “data” data frame with a column “TREAT” using data$OCC <- ifelse(data$TREAT=="ref", 1, 2).
With Excel, assuming the sort variable is encoded in the column E with values “ref” and “test”, type =IF(E2="ref",1,2)
to generate the first value of the “OCC” column and then propagate to the entire column:
Examples:
 demo project_occasions1.pkx: crossover study with two treatments
The subject column is tagged as ID, the treatment column as CATEGORICAL COVARIATE and an additional column encoding the two periods with integers “1” and “2” as OCCASION column.
In the “Check lambda_z” (for the NCA) and the “Check init.” (for CA), each occasion of each individual is displayed. The syntax “1#2” indicates individual 1, occasion 2, according to the values defined in the ID and OCCASION columns.
In the “Individual estimates” output tables, the first columns indicate the ID and OCCASION (reusing the data set headers). The covariates are indicated at the end of the table. Note that it is possible to sort the table by any column, including ID, OCCASION and the covariates.
The OCCASION values are available in the plots for stratification, in addition to possible CATEGORICAL or CONTINUOUS COVARIATES (here “TREAT”).
 demo project_occasions2.pkx: study with two treatments and with/without food
In this example, we have three sorting variables: ID, TREAT and FOOD. The TREAT and FOOD columns are duplicated: once with strings to be used as CATEGORICAL COVARIATE (TREAT and FOOD) and once with integers to be used as OCCASION (occT and occF).
In the individual parameters tables and plots, three levels are visible. In the “Individual parameters vs covariates”, we can plot Cmax versus FOOD, split by TREAT for instance (Cmax versus TREAT split by FOOD is also possible).
Covariates (“Carry” variables)
Individual information that needs to be carried over to output tables and plots must be tagged as CATEGORICAL or CONTINUOUS COVARIATE. Categorical covariates define variables with a few categories, such as treatment or sex, and are encoded as strings. Continuous covariates define variables on a continuous scale, such as weight or age, and are encoded as numbers. Covariates are not automatically used as “Sort” variables; a dedicated OCCASION column is necessary (see above).
Covariates will automatically appear in the output tables. Plots of estimated NCA and/or CA parameters versus covariate values will also be generated. In addition, covariates can be used to stratify (split, color or filter) any plot. Statistics about the covariates distributions are available in table format in “Results > Cov. stat.” and in graphical format in “Plots > Covariate viewer”.
Note: it is preferable to avoid spaces and special characters (stars, etc.) in the strings for the categories of the categorical covariates. Underscores are allowed.
Example:
 demo project_covariates.pkx
In this data set, “SEX” is tagged as CATEGORICAL COVARIATE and “WEIGHT” as CONTINUOUS COVARIATE.
The “Cov. stat.” table shows a few statistics of the covariate values in the data set. In the “Covariate viewer” plot, we see that the distribution of weight is similar for males and females.
After running the NCA and CA tasks, both covariates appear in the table of estimated individual parameters.
In the “Individual parameters vs covariates” plot, the parameter values (here Cmax and AUCINF_pred) are plotted against weight as scatter plots, and against sex (female/male) as boxplots.
All plots can be stratified using the covariates. For instance, the “Data viewer” can be colored by weight after having created 3 weight groups. Below we also show the plot “Distribution of the parameters” split by sex with selection of the AUCINF_pred parameter.
Other useful columntypes
In addition to the typical cases presented above, we briefly present a few additional columntypes that may be convenient.
 ignoring a line: with the IGNORED OBSERVATION columntype (ignores the measurement only) or IGNORED LINE (ignores the entire line, including regressor and dose information)
 working with several types of observations: different types of observations can be recorded in the same dataset (for instance parent and metabolite concentrations). They are distinguished using the OBSERVATION ID columntype. One of the observation ids is then selected in the “Tasks>Run” section to perform the calculations.
 working with several types of administrations: a single data set can contain different types of administrations, for instance IV bolus and extravascular, distinguished using the ADMINISTRATION ID columntype. The setting “administration type” in “Tasks>Run” can be chosen separately for each administration id, and the appropriate parameter calculations will be performed.
Overview of all columntypes
Columntypes used for all types of lines:
 ID (mandatory): identifier of the individual
 OCCASION (formerly OCC): identifier (index) of the occasion
 TIME (mandatory): time of the dose or observation record
 DATE/DAT1/DAT2/DAT3: date of the dose or observation record, to be used in combination with the TIME column
 EVENT ID (formerly EVID): identifier to indicate if the line is a doseline or a responseline
 IGNORED OBSERVATION (formerly MDV): identifier to ignore the OBSERVATION information of that line
 IGNORED LINE (from the 2019 version on): identifier to ignore all the information of that line
 CONTINUOUS COVARIATE (formerly COV): continuous covariates (which can take values on a continuous scale)
 CATEGORICAL COVARIATE (formerly CAT): categorical covariate (which can only take a finite number of values)
 REGRESSOR (formerly X): defines a regression variable, i.e. a variable that can be used in the structural model (used e.g. for time-varying covariates)
 IGNORE: ignores the information of that column for all lines
Columntypes used for responselines:
 OBSERVATION (mandatory, formerly Y): records the measurement/observation for continuous, count, categorical or time-to-event data
 OBSERVATION ID (formerly YTYPE): identifier for the observation type (to distinguish different types of observations, e.g. PK and PD)
 CENSORING (formerly CENS): marks censored data, below the lower limit or above the upper limit of quantification
 LIMIT: upper or lower boundary for the censoring interval in case of CENSORING column
Columntypes used for doselines:
 AMOUNT (mandatory, formerly AMT): dose amount
 ADMINISTRATION ID (formerly ADM): identifier for the type of dose (given via different routes for instance)
 INFUSION RATE (formerly RATE): rate of the dose administration (used in particular for infusions)
 INFUSION DURATION (formerly TINF): duration of the dose administration (used in particular for infusions)
 ADDITIONAL DOSES (formerly ADDL): number of doses to add in addition to the defined dose, at intervals INTERDOSE INTERVAL
 INTERDOSE INTERVAL (formerly II): interdose interval for doses added using ADDITIONAL DOSES or STEADYSTATE column types
 STEADY STATE (formerly SS): marks that steadystate has been achieved, and will add a predefined number of doses before the actual dose, at interval INTERDOSE INTERVAL, in order to achieve steadystate
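The ADDITIONAL DOSES convention can be illustrated with a small sketch of the dose lines it implies. This is an illustration of the convention only, not PKanalix code:

```python
def expand_addl(dose_time, amount, addl, ii):
    """Explicit dosing times implied by one dose line with
    ADDITIONAL DOSES (addl) at INTERDOSE INTERVAL (ii)."""
    return [(dose_time + k * ii, amount) for k in range(addl + 1)]

# One dose line "time 0, amount 100, ADDL = 3, II = 12" stands for 4 doses:
doses = expand_addl(0, 100, 3, 12)
# -> [(0, 100), (12, 100), (24, 100), (36, 100)]
```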
Excel macro to adapt data format for PKanalix
We provide an Excel macro that adapts the formatting of your dataset for use in PKanalix. It is described in detail, together with the download link, in the next section.
2.1. Excel macro to adapt data format for PKanalix
We provide an Excel macro that can be used to adapt the formatting of your dataset for use in PKanalix:
Download the Excel macro to adapt data formatting for PKanalix (version March 21st 2022)
In case of download issues: the download may be blocked by your browser, which identifies this type of file as a potential threat. Right-click the link and choose “Save link as”. After validating, if the browser displays “the file can’t be downloaded securely”, click on the arrow to display more options and choose “Keep”. Once the file is downloaded, right-click the file, open its Properties, select the Unblock checkbox at the bottom of the General tab, and click OK.
The macro is not compatible with Excel on Mac.
The macro takes as input the original dataset, in Excel or text format, together with the dosing information given separately, and produces an adapted dataset with the doses in a new column. The macro can also translate information on missing observations, censored observations, urine volume, and occasions into the PKanalix-compatible format. The adapted dataset is saved as an Excel file and, if selected, as a CSV file for direct use in PKanalix.
More details of the actions performed by the macro are described below.
Steps to follow to adapt the data format:
 Click on the button “Select file to adapt” and choose the dataset file to open in Excel
 Fill the form to give information on the dataset:
– In case of several sheets, select the name of the sheet to adapt
 Select the columns containing ID, TIME and OBSERVATION information
 If individual concentrations should be split in different profiles by occasions, check the option “Occasions: by sort variables”, select one or two levels, and select the column(s) containing the OCCASION information
 If individual concentrations should be split by intervals of successive times, check the option “Occasion: by time intervals”
 In case of urine data, change the corresponding radio button and select the columns containing each collection interval START TIME and END TIME, and the column containing VOLUME information.
 Give treatment information: single/multiple dose, time of first dose, dose amount (individual dose amounts can be read from a column of the dataset), number of doses, interdose interval, and infusion duration in case of IV infusion.
 In case of censored observations, indicate the tag(s) representing censored observations in the dataset if they differ from the default tags (several tags can be entered, separated by commas without spaces), and give the value of the LLOQ or the header of a column containing LLOQ values.
 Click on “Format data” and wait for the result.
Result:
 The adapted dataset is saved in a new .xlsx file with a name derived from the original name or user-specified.
 If requested in the form, the adapted dataset is also saved in a new .CSV file. If the initial file has several sheets, the sheet name is added to the CSV file name.
 If the file already exists, a new sheet is added to the file, unless a sheet with the same name already exists.
 All changes from the original dataset are highlighted with colors:
— blue = dosing time or amount,
— gray = missing observation,
— orange = censored observations,
— green = urine volume at dosing time,
— yellow = new occasions as integers.
Actions performed by the macro:
 A column AMT is added, containing the same dosing regimen for all subjects or subjectoccasions (with an option to read individual dose amounts from a column of the dataset). The column is colored in blue, as well as dosing times. Note that the dosing times are correct in the column tagged as TIME in the form, but other time columns will not be consistent.
 In case of urine data, the volume is set to 0 at dosing times (colored in green) to avoid format errors, and both start and end times of the interval are set to the dosing time. The macro checks that time intervals are continuous (no gaps allowed). Only the column END TIME should be used as TIME in PKanalix.
 In case of censored observations, a column CENS is added and colored in orange, containing 0 for noncensored observations and 1 for censored observations. The LLOQ is written instead of the censoring tag in the column of observations and colored as well.
 Missing observations (all values that are not numbers) are replaced by “.” and colored in gray.
 In case of occasions, for each column tagged as containing occasions, a new column with the same header concatenated with “_OCC” is added with occasion numbers and colored in yellow. If the column tagged already contained numbers, this just duplicates the column, but the initial column can then be used as categorical covariate.
 In case of occasions by time intervals, two identical columns named Interval_OCC and Interval_CATCOV are created next to the column TIME, with indexes of intervals of successive times for each individual.
 The first lines that do not contain data are concatenated into a single header row.
 Rows that do not contain data are deleted.
 Formatting is adapted for correct exporting to CSV format.
Examples
 Single doses: the original data on the left contains concentration data. All individuals have received single doses at time 0, with individual dose amounts in the column Dose. The macro is used to add the dosing times. In the form, the columns Subject, Time and Conc are tagged as ID, TIME and OBSERVATION respectively. The column Dose is indicated in the form as containing the dose amounts, and the single dosing time is also specified in the form. In addition to adding the dosing times and the corresponding amounts in a new column AMT, the macro replaces the missing observations (any value in Conc that is not a number) by “.”, which is compatible with PKanalix. The macro form can also be used to specify that <BLQ> is a flag for censored observations rather than missing ones, with an LLOQ of 1. In that case the macro adds a new column CENS and replaces <BLQ> by the LLOQ. Finally, the second line containing units is removed from the adapted dataset, because it cannot be read by PKanalix.
 Multiple doses: the original data on the left contains only ids, measurement times and concentrations. The macro is used to add doses. In the macro form, the columns Id, Time and Y are tagged as ID, TIME and OBSERVATION. A multiple dose regimen is specified with 5 doses starting at time 48, with an interdose interval of 12 and an amount of 150. Running the macro with these settings adds a column AMT with 5 doses per individual.
 Censored observations: on the example below (original data on the left, adapted data on the right), the column Period already contains occasions as integers, corresponding to the different values of FORM. Censored observations are flagged with “BLQ” in the column DV. The macro is used with the following settings: ID, time, DV and Period are tagged respectively as ID, TIME, OBSERVATION and OCCASION. A single dose is specified with value 600 at time 0, and the specified LLOQ value is 0.06. Running the macro adds two new columns AMT and CENS containing the doses and the censored observations flags. “BLQ” is replaced by 0.06 in the column DV. The column Period is left untouched since it is already compatible with PKanalix.
 Censored observations with LLOQ read from data: on the next example, the LLOQ value is not given manually in the form; instead, the option “read LLOQ from data” is selected. For each censored line, the value in the column LLOQ (selected in the form) is used as the LLOQ. Notice that the value can be extracted from a string: all non-numeric characters are removed except “.”.
 Occasions by sort variable: the original data on the left contains a sort variable (or time-varying covariate) “Treatment”, which should be defined as different occasions for PKanalix. Running the macro with “Treatment” tagged as “Occasion” creates a new column OCC in the adapted dataset, with different integer occasions corresponding to the different values of “Treatment”. Moreover, the dose amount is read for each subject-occasion from the column “Dose”. Here a single dosing time at 0 has been specified in the macro form. Finally, the second line containing units is removed from the adapted dataset, because it cannot be read by PKanalix.
 Occasions by time intervals: the original data on the left contains concentration profiles measured on different days, with the time after the last dose in hours. Two doses have been given on days 1 and 4, and the drug concentration is measured on days 1 to 6. Selecting the option “Occasions as time intervals” in the form creates the columns Interval_CATCOV and Interval_OCC, which distinguish the concentration profiles with the indexes of the intervals of successive times. Lines are assigned to a time interval in order, with a change of interval when the id changes or when the time does not increase from one line to the next. Here it is not necessary to tag the column Analyte as OCCASION (or sort variable): since the concentration profiles from parent and metabolite appear in order as different intervals of successive times, they are also distinguished by the Interval columns. The form should also be used to specify the doses (single dose at time 0 with amount 5) and the LLOQ (0.5), with BLQ as the censoring tag.
 Urine data: the original data on the left contains urine concentration measurements in the column CONC, with the urine volume in VOL collected during the interval defined by START TIME and END TIME. Using the macro to add a single dose of 150 at time 0 to all individuals produces the data on the right. Since VOL is tagged as urine volume in the macro, the value 0 for VOL is added at dosing times, and the macro checks that the intervals are continuous. The missing concentration for the interval between times 24 and 48 for id 2 is replaced by a “.”, compatible with PKanalix.
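The interval-assignment logic described in the “Occasions by time intervals” example can be sketched as follows. This is a simplified Python illustration, not PKanalix internals: the function name and the assumption that numbering restarts at 1 for each individual are ours.

```python
def assign_intervals(rows):
    """Assign interval indexes to (id, time) rows: numbering restarts for
    each new individual, and a new interval starts when the time does not
    increase from one line to the next (hypothetical re-implementation)."""
    intervals = []
    prev = None
    idx = 0
    for rid, t in rows:
        if prev is None or rid != prev[0]:
            idx = 1          # first line of a new individual
        elif t <= prev[1]:
            idx += 1         # time does not increase: new interval
        intervals.append(idx)
        prev = (rid, t)
    return intervals
```

For two successive profiles of one individual followed by a second individual, `assign_intervals([(1, 0), (1, 2), (1, 4), (1, 0), (1, 2), (2, 0), (2, 3)])` yields interval indexes `[1, 1, 1, 2, 2, 1, 1]`.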
2.2.Units: display and scaling
 It displays the units of the NCA and CA parameters, calculated automatically from the information provided by the user about the units of the measurements in the dataset. Units are shown in the results tables and on plots, and are added to the saved files.
 It allows scaling the values of the dataset to report outputs in preferred units, which facilitates the interpretation of the results. Scaling is done in the PKanalix data manager and does not change the original data file.
 Units definition
 Units of the NCA parameters
 Units of the CA parameters
 Units display
 Units preferences
Units definition
Units of the NCA and CA parameters are considered as combinations of units of: time, amount and volume. For instance, unit of AUC is [concentration unit * time unit] = [amount unit / volume unit * time unit]. These quantities correspond to columns in a typical PK dataset: time of measurements, observed concentration, dose amount (and volume of collected urine when relevant).
The “Units” block allows defining the preferred units for the output NCA parameters (purple frame below), which are related to the units of the data set columns (green frame below) via scaling factors (in the middle). The output units are displayed in the results and plots after running the NCA and CA tasks.
Concentration is a quantity defined as “amount per volume”. It has two separate units, which are linked (and equal) to those of AMT and VOLUME respectively. Changing the amount unit of CONC will automatically update the amount unit of AMT. This constraint allows PKanalix to simplify the output parameter units, for instance reporting Cmax_D as [1/mL] instead of [mg/mL/mg].
The amount unit can only be defined as a mass unit. This document explains how to adapt the parameter units if the amount unit in the data is actually “/kg”.
Units without data conversion: Output units correspond to units of the input data.
In other words, the desired units of the NCA and CA parameters are the units of the measurements in the dataset. In this case, select the units of the input data from the dropdown menus and keep the default scaling factors (equal to one). All calculations are performed using the values in the original dataset, and the selected units are displayed in the results tables and on the plots.
Units conversion: output units are different from the units of the input data.
NCA and CA parameters can be calculated and displayed in any units, not necessarily those used in the dataset. The scaling factors (by default equal to 1) multiply the corresponding columns of the dataset and transform it into a dataset in the new units. PKanalix shows the data with the new, scaled values after clicking on the “accept” button. This conversion occurs only internally and the original dataset file remains unchanged. So, in this case, select the desired output units from the list and, knowing the units of the input data, scale the original values to the new units.
Example 1: input time units in [hours] and desired output units in [days]
For instance, let measurement times in a dataset be in hours. To obtain the outputs in days, set the output time unit as “days” and the scaling factor to 1/24, as shown below. It reads as follows:
(values of time in hours from the dataset) * (1/24) = time in days.
After accepting the scaling, a dataset shown in the Data tab is converted internally to a new data set. It contains original values multiplied by scaling factors. Then, all computations in the NCA and CA tasks are performed using new values, so that the results correspond to selected units.
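The internal conversion can be illustrated with a short Python sketch. The dictionary-based row representation and column names are assumptions for illustration only; PKanalix performs this multiplication internally on the tagged columns.

```python
# Hypothetical per-column scaling factors, as entered in the "Units" block
scaling = {"TIME": 1 / 24}   # dataset times in hours, outputs wanted in days

row = {"TIME": 12.0, "CONC": 5.0}
# Columns without a declared factor keep their original values (factor 1)
scaled = {col: value * scaling.get(col, 1.0) for col, value in row.items()}
# scaled["TIME"] is 0.5 (days); scaled["CONC"] is unchanged
```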
Example 2: input data set concentration units in [ng/mL] and amount in [mg]
Let’s assume that we have a data set where the concentration units are [ng/mL]=[ug/L] and the dose amount units are [mg], as presented above. It is not possible to indicate these units directly, as the amount unit of CONC and AMT must be the same. One option would be to indicate the CONC as [mg/kL] and the AMT as [mg] but having the volume unit as [kL] is not very convenient. We will thus use the scaling factors to harmonize the units of the concentration and the amount.
If we would like to have the output NCA parameters using [ug] and [L], we can define the CONC units as [ug/L] (as the data set input units) and scale the AMT to convert the amount column from [mg] to [ug] unit with a scaling factor of 1000. After clicking “accept” on the bottom right, the values in the amount column of the (internal) data set have been multiplied by 1000 by PKanalix such that they are now in [ug].
If we would like to have the output NCA parameters using [ng] and [mL], we can define the CONC units as [ng/mL] (as the data set input units) and scale the AMT to convert the amount column from [mg] to [ng] unit with a scaling factor of 1000000.
Units for the NCA parameters
Rsq  no unit
Rsq_adjusted  no unit
Corr_XY  no unit
No_points_lambda_z  no unit
Lambda_z  time^-1
Lambda_z_lower  time
Lambda_z_upper  time
HL_lambda_z  time
Span  no unit
Lambda_z_intercept  no unit
T0  time
Tlag  time
Tmax_Rate  time
Max_Rate  amount.time^-1
Mid_Pt_last  time
Rate_last  amount.time^-1
Rate_last_pred  amount.time^-1
AURC_last  amount
AURC_last_D  no unit
Vol_UR  volume
Amount_recovered  amount
Percent_recovered  %
AURC_all  amount
AURC_INF_obs  amount
AURC_PerCentExtrap_obs  %
AURC_INF_pred  amount
AURC_PerCentExtrap_pred  %
C0  amount.volume^-1
Tmin  time
Cmin  amount.volume^-1
Tmax  time
Cmax  amount.volume^-1
Cmax_D  volume^-1
Tlast  time
Clast  amount.volume^-1
AUClast  time.amount.volume^-1
AUClast_D  time.volume^-1
AUMClast  time^2.amount.volume^-1
AUCall  time.amount.volume^-1
AUCINF_obs  time.amount.volume^-1
AUCINF_D_obs  time.volume^-1
AUCINF_pred  time.amount.volume^-1
AUCINF_D_pred  time.volume^-1
AUC_PerCentExtrap_obs  %
AUC_PerCentBack_Ext_obs  %
AUMCINF_obs  time^2.amount.volume^-1
AUMC_PerCentExtrap_obs  %
Vz_F_obs  volume
Cl_F_obs  volume.time^-1
Cl_obs  volume.time^-1
Cl_pred  volume.time^-1
Vss_obs  volume
Clast_pred  amount.volume^-1
AUC_PerCentExtrap_pred  %
AUC_PerCentBack_Ext_pred  %
AUMCINF_pred  time^2.amount.volume^-1
AUMC_PerCentExtrap_pred  %
Vz_F_pred  volume
Cl_F_pred  volume.time^-1
Vss_pred  volume
Tau  time
Ctau  amount.volume^-1
Ctrough  amount.volume^-1
AUC_TAU  time.amount.volume^-1
AUC_TAU_D  time.volume^-1
AUC_TAU_PerCentExtrap  %
AUMC_TAU  time^2.amount.volume^-1
Vz  volume
Vz_obs  volume
Vz_pred  volume
Vz_F  volume
CLss_F  volume.time^-1
CLss  volume.time^-1
CAvg  amount.volume^-1
FluctuationPerCent  %
FluctuationPerCent_Tau  %
Accumulation_index  no unit
Swing  no unit
Swing_Tau  no unit
Dose  amount
N_Samples  no unit
MRTlast  time
MRTINF_obs  time
MRTINF_pred  time
Units for the PK parameters
Units are available only in PKanalix. When exporting a project to Monolix, the values of the PK parameters are converted back to the original dataset units. Below, volumeFactor is defined implicitly as volumeFactor = amtFactor/concFactor, where each “factor” is the scaling factor used in the “Units” block of the Data tab.
PARAMETER  UNIT  INVERSE of UNITS 
V (V1, V2, V3)  volume  value/volumeFactor 
k (ka, k12, k21, k31, k13, Ktr)  1/time  value*timeFactor 
Cl, Q (Q2, Q3)  volume/time  value*timeFactor/volumeFactor 
Vm  amount/time  value*timeFactor/amountFactor 
Km  concentration  value/concFactor 
T (Tk0, Tlag, Mtt)  time  value/timeFactor 
alpha, beta, gamma  1/time  value*timeFactor 
A, B, C  1/volume  value*volumeFactor 
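The back-conversion in the table above can be illustrated with a short Python sketch. The factor values below are hypothetical examples; volumeFactor is derived from the amount and concentration factors as stated above.

```python
# Hypothetical scaling factors, as set in the "Units" block of the Data tab
amt_factor = 1000.0      # e.g. dose column scaled from mg to ug
conc_factor = 1.0        # concentration column left unscaled
time_factor = 1 / 24     # e.g. time column scaled from hours to days

# volumeFactor is defined implicitly from the amount and concentration factors
volume_factor = amt_factor / conc_factor

# Convert displayed PK parameter values back to the original dataset units
V_display, Cl_display = 2.5, 1.2                        # illustrative values
V_original = V_display / volume_factor                  # V: value/volumeFactor
Cl_original = Cl_display * time_factor / volume_factor  # Cl: value*timeFactor/volumeFactor
```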
Units display
To visualise units, switch on the “units toggle” and accept a dataset. Then, after running the NCA and CA tasks, units are displayed:
 In the results table next to the parameter name (violet frame), in the table copied with “copy” button (blue frame) and in the saved .txt files in the result folder (green frame).
 On plots if the “units” display is switched on in the General plots settings
Units preferences
The units selected by default when starting a new PKanalix project can be chosen in the menu Settings > Preferences.
2.3.Filtering a data set
Starting with the 2020 version, it is possible to filter a data set so that only a subpart of it is taken into account in your analysis. Filters can select specific IDs, times, measurement values, etc. It is also possible to define complementary filters as well as filters of filters. Filtering is accessible through the filters item in the data tab.
 Creation of a filter
 Filtering actions
 Filters with several actions
 Other filters: filter of filter and complementary filters
Creation of a filter
To create a filter, you need to click on the data set name. You can then create a “child”. It corresponds to a subpart of the data set where you will define your filtering actions.
You can see at the top (in the green rectangle) the action that you will complete, and you can CANCEL, ACCEPT, or ACCEPT & APPLY with the buttons at the bottom.
Filtering a data set: actions
In all the filtering actions, you need to define
 An action: it corresponds to one of the following possibilities: select ids, remove ids, select lines, remove lines.
 A header: it corresponds to the column of the data set you wish to have an action on. Notice that it corresponds to a column of the data set that was tagged with a header.
 An operator: it corresponds to the operator of choice (=, ≠, <, ≤, >, or ≥).
 A value. When the header contains numerical values, the user can define it. When the header contains strings, a list is proposed.
For example, you can
 Remove the ID 1 from your study:
In that case, all the IDs except ID = 1 will be used for the study.  Select all the lines where the time is less than or equal to 24:
In that case, all lines with a time strictly greater than 24 will be removed. If a subject has no measurements left, it is removed from the study.  Select all the ids where SEX equals F:
In that case, all the males will be removed from the study.  Remove all IDs where WEIGHT is less than or equal to 65:
In that case, only the subjects with a weight strictly greater than 65 will be kept for the study.
In any case, the filtered data set is displayed in the data tab.
Filters with several actions
In the previous examples, we defined only one action. It is also possible to combine several actions in a filter, using UNIONs and/or INTERSECTIONs of actions.
INTERSECTION
By clicking on the + and – buttons on the right, you can define an intersection of actions. For example, by clicking on +, you can define a filter corresponding to the intersection of
 The IDs that are different to 1.
 The lines with time values less than or equal to 24.
Thus, in that case, all the lines with a time less than or equal to 24 and an ID different from 1 will be used in the study. If we look at the following data set as an example
Initial data set 
Resulting data set after action: select IDs ≠ 1  Considered data set for the study as the intersection of the two actions 
Resulting data set after action: select lines with time ≤ 24 
UNION
By clicking on the + and – buttons at the bottom, you can define a union of actions. For example, in a data set with multiple doses, you can focus on the first and the last dose. Thus, by clicking on +, you can define a filter corresponding to the union of
 The lines where the time is strictly less than 12.
 The lines where the time is greater than 72.
Initial data set 
Resulting data set after action: select lines where the time is strictly less than 12 
Considered data set for the study as the union of the three actions 
Resulting data set after action: select lines where the time is greater than 72 

Resulting data set after action: select lines where amt equals 40 
Notice that, if you define only the first two actions, all the dose lines at times strictly between 12 and 72 will also be removed. Thus, to keep all the doses, we need to add the condition selecting the lines where the dose is defined.
In addition, it is possible to do any combination of INTERSECTION and UNION.
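For readers who post-process data sets outside the interface, the INTERSECTION and UNION of actions correspond to combining boolean conditions. The following Python sketch mirrors the examples above; the row representation and column names are assumptions for illustration, not a PKanalix API.

```python
rows = [
    {"id": 1, "time": 0,  "amt": 40},    # dose line
    {"id": 1, "time": 24, "amt": None},
    {"id": 2, "time": 6,  "amt": None},
    {"id": 2, "time": 80, "amt": None},
]

# INTERSECTION of actions: remove ID 1 AND select lines with time <= 24
intersection = [r for r in rows if r["id"] != 1 and r["time"] <= 24]

# UNION of actions: time < 12 OR time > 72 OR the line carries a dose
union = [r for r in rows if r["time"] < 12 or r["time"] > 72
         or r["amt"] is not None]
```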
Other filters: filter of filter and complementary filters
Filtering a data set can be nested. Based on the definition of a filter, by clicking on the filter, it is possible to create:
 A child: it corresponds to a new filter with the initial filter as the source data set.
 A complement: corresponds to the complement of the filter. For example, if you defined a filter with only the IDs where the SEX is F, then the complement corresponds to the IDs where the SEX is not F.
3.Non Compartmental Analysis
One of the main features of PKanalix is the calculation of the parameters in the Non Compartmental Analysis framework.
NCA task
There is a dedicated task in the “Tasks” frame as in the following figure.
This task contains two different parts.
 The first one, called “Run”, corresponds to the calculation button and to the settings for the calculation and the acceptance criteria. The meaning of all the settings and their defaults is given here.
 The second one allows defining the global settings impacting the calculation of \(\lambda_z\) (explained here) and the individual choice of measurements for the \(\lambda_z\) calculation (explained here), along with the visualization of the regression line for each subject.
NCA results
Once the NCA task has been run, the results are available in the “Results” frame. Two tables are proposed.
Non compartmental analysis results per individual
Individual estimates of the NCA parameters are displayed in the table of the tab “INDIV. ESTIM.”, as in the following figure
All the computed parameters depend on the type of observation and administration. All the parameters are described here. Notice that on all tables, there is an icon in the top right corner to copy the table into a Word or Excel file (purple frame).
Summary statistics on NCA results
A summary table is also proposed in the tab “SUMMARY”, as in the following figure. All the summary calculations are described here.
The summary table can be split and filtered according to the categorical and continuous covariates tagged in the data set and the acceptance criteria defined in the NCA settings. The values of the split covariates and criteria are displayed in the first columns of the summary table (blue highlight). The order of these columns corresponds to the order of the clicks to setup the splitting covariates (orange highlight). Continuous covariates can be discretized into groups by defining the group limits (yellow highlight), and categorical covariates categories can be grouped together.
It is currently not possible to split the table in several subtables (instead of splitting the rows), nor to choose the orientation of the table (NCA parameters as columns for instance).
Upon saving the PKanalix project, these results stratification settings are saved in the result folder and they will be reloaded when reloading a PKanalix project. The table in the <result folder>/PKanalix/IndividualParameters/nca/ncaIndividualParametersSummary.txt takes into account the split definition. This table is generated when clicking “run” in the task tab (usually without splits, as the result tab is not yet available to define them) and also upon saving the project (with splits if defined).
NCA plots
In the “Plots” frame, several plots associated with the individual parameters are displayed.
 Correlation between NCA parameters: The purpose of this plot is to display scatter plots for each pair of parameters. It allows identifying correlations between parameters and checking the coherence of the parameter estimates across individuals.
 Distribution of the NCA parameters: The purpose of this plot is to see the empirical distribution of the parameters and thus have an idea of their distribution over the individuals.
 NCA parameters w.r.t. covariates: The purpose of this plot is to display the individual parameters as a function of the covariates. It allows to identify correlation effects between the individual parameters and the covariates.
 NCA individual fits: This plot shows the lambdaZ regression line for each individual.
NCA outputs
After running the NCA task, the following files are available in the result folder: <resultFolder>/PKanalix/IndividualParameters/nca
 Summary.txt contains the summary of the NCA parameters calculation, in a format easily readable by a human (but not easy to parse for a computer)
 ncaIndividualParametersSummary.txt contains the summary of the NCA parameters in a computerfriendly format.
 The first column corresponds to the name of the parameters
 The second column corresponds to the CDISC name of the parameters
 The other columns correspond to the several elements describing the summary of the parameters (as explained here)
 ncaIndividualParameters.txt contains the NCA parameters for each subject-occasion along with the covariates.
 The first line corresponds to the name of the parameters
 The second line corresponds to the CDISC name of the parameters
 The other lines correspond to the value of the parameters
 pointsIncludedForLambdaZ.txt contains for each individual the concentration points used for the lambda_z calculation.
 id: individual identifiers
 occ: occasions (if present). The column header corresponds to the data set header of the column(s) tagged as occasion(s).
 time: time of the measurements
 concentration: concentration measurements as displayed in the NCA individual fits plot (i.e. after applying the BLQ rules)
 BLQ: if the data point is a BLQ (1) or not (0)
 includedForLambdaZ: if this data point has been used to calculate the lambdaZ (1) or not (0)
The files ncaIndividualParametersSummary.txt and ncaIndividualParameters.txt can be exported in R for example using the following command
read.table("/path/to/file.txt", sep = ",", header = T)
Remarks
 To load the individual parameters using the PKanalix names as headers, you just need to remove the second line (the CDISC names)
ncaParameters = read.table("/path/to/file.txt", sep = ",", header = T); ncaParameters = ncaParameters[-1,] # remove the CDISC name line
 To load the individual parameters using the CDISC names as headers, you just need to skip the first line
ncaParameters = read.table("/path/to/file.txt", sep = ",", header = T, skip = 1)
 The separator is the one defined in the user preferences. We set “,” in this example as it is the one by default.
3.1.Check lambda_z
The \(\lambda_z\) is the slope of the terminal elimination phase. If estimated, the NCA parameters will be extrapolated to infinity. Check lambda_z tab visualizes the linear regression model for each individual and allows to control the points included in the calculation of the \(\lambda_z\) parameter.
General rule
The “Check lambda_z” tab shows the data of each individual plotted by default on yaxis logscale.
 Points used in the \(\lambda_z\) calculation are in blue.
 Points not used in the \(\lambda_z\) calculation are in grey.
 The \(\lambda_z\) curve is in green.
A main rule to select which points to include in the \(\lambda_z\) calculation applies to all individuals. You find it on top in the right panel – red frame in the figure above. There are four methods in a dropdown menu:
 “R2” chooses the number of points that optimize the correlation coefficient.
 “Adjusted R2” (called “Best Fit” in WinNonlin) optimizes the adjusted correlation coefficient, taking into account the number of points. The “maximum number of points” and “minimum time” settings allow further restricting the selection of points.
 “Interval” selects all points within a given time interval.
 “Points” corresponds to the number of terminal points taken into account in the calculation.
The \(\lambda_z\) is calculated via linear regression. The weighting used in the cost function is selected in the right panel (blue highlight). Three options are available: uniform (default), 1/Y, 1/Y^2.
See the Settings page and the Calculations page for more details. In addition to the main rule, it is possible to define more precisely (manually) which points to include or exclude for each individual.
Manual choice of measurements for the \(\lambda_z\) calculation
Charts in the check lambda_z tab are interactive. For each individual specific points can be selected manually to be included or excluded from the \(\lambda_z\) calculation. There are two methods to do it: define a range (time interval) with a slider or select each point individually.
Change of the range
For a given individual, the grey arrows below the chart define a range – the time interval in which the measured concentrations are used in the \(\lambda_z\) calculation. New points to include are colored in light blue, and points to remove are colored in dark grey, as in the following figure:
The button APPLY INCLUDE/EXCLUDE in the top-right corner of the chart area applies the manual modifications of the points used in the calculation of the \(\lambda_z\). If the modifications concern several individuals, then clicking on the “Apply include/exclude” button updates all of them automatically.
The plots are updated with the new points and the new \(\lambda_z\) curve. For modified plots, a “return arrow” appears in the top-left corner (red frame in the figure below). It resets the individual calculation to the general rule. The dropdown menu of the “Apply include/exclude” button (blue frame) allows resetting all individuals to the general rule, while “Clear selection” clears the current selection made before clicking on the “Apply include/exclude” button.
Include/exclude single points
It is possible to manually modify each point to include it in the calculation of \(\lambda_z\) or to remove it from the calculation:
 If you click on an unused point (light grey), it will change its color into light blue and will be selected for inclusion (figure on the left below).
 If you click on a used point (blue), it will change its color into dark grey and will be selected for exclusion (figure on the right below).
Clicking on the “Apply include/exclude” button applies these changes. The “return arrow” in the top-left corner of each individual plot resets the individual selection to the general rule one by one, while the “Reset to general rule” button in the “Apply include/exclude” dropdown menu resets all of them at once.
Remarks
 If the general rule is modified, only the individuals for which the general rule applies will be updated. Individuals for which the points have been manually modified will not be updated.
 If the \(\lambda_z\) cannot be computed, the background of the plot will be red, as in the following figure.
3.2.Data processing and calculation rules
This page presents the rules applied to preprocess the data set and the calculation rules applied for the NCA analysis.
Data processing
Ignored data
All observation points occurring before the last dose recorded for each individual are excluded. Observation points occurring at the same time as the last dose are kept, irrespective of their position in the data set file.
Note that for plasma data, negative or zero concentrations are not excluded.
Forbidden situations
For plasma data, mandatory columns are ID, TIME, OBSERVATION, and AMOUNT. For urine data, mandatory columns are ID, TIME, OBSERVATION, AMOUNT and one REGRESSOR (to define the volume).
Two observations at the same time point will generate an error.
For urine data, negative or null volumes and negative observations generate an error.
Additional points
For plasma data, if an individual has no observation at dose time, a value is added:
 Extravascular and Infusion data: For single dose data, a concentration of zero. For steadystate, the minimum value observed during the dosing interval.
 IV Bolus data: the concentration at dose time (C0) is extrapolated using a log-linear regression (i.e. log(concentration) versus time) with uniform weight on the first two data points. In the following cases, C0 is taken to be the first observed measurement instead (which can be zero or negative):
 one of the two observations is zero
 the regression yields a slope >= 0
BLQ data
Measurements marked as BLQ with a “1” in the CENSORING column are replaced by zero, the LOQ value, or the LOQ value divided by 2, or are considered as missing (i.e. excluded), depending on the chosen setting. They are then handled like any other measurement. The LOQ value is indicated in the OBSERVATION column of the data set.
Steady-state
Steady-state is indicated using the STEADYSTATE and INTERDOSE INTERVAL column types. Equal dosing intervals are assumed. Observation points occurring after the dose time + interdose interval are excluded for Cmin and Cmax, but not for lambda_z. Dedicated parameters are computed, such as the AUC over the interdose interval, and specific formulas are used for, e.g., the clearance and the volume. More details can be found here.
Urine
Urine data is assumed to be single-dose, irrespective of the presence of a STEADYSTATE column. For the NCA analysis, the data is not used directly. Instead, the interval midpoints and the excretion rate of each interval (amount eliminated per unit of time) are calculated and used:
\( \textrm{midpoint} = \frac{\textrm{start time} + \textrm{end time}}{2} \)
\( \textrm{excretion rate} = \frac{\textrm{concentration} \times \textrm{volume}}{\textrm{end time} - \textrm{start time}} \)
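These two formulas can be applied per collection interval, as in the following Python sketch. The tuple layout (start, end, concentration, volume) is an assumed representation for illustration.

```python
def urine_rates(intervals):
    """intervals: list of (start_time, end_time, concentration, volume).
    Returns (midpoint, excretion_rate) for each collection interval."""
    out = []
    for start, end, conc, vol in intervals:
        midpoint = (start + end) / 2
        rate = conc * vol / (end - start)   # amount eliminated per unit time
        out.append((midpoint, rate))
    return out
```

For a single interval from 0 to 24 h with concentration 2.0 and volume 120.0, `urine_rates([(0, 24, 2.0, 120.0)])` returns `[(12.0, 10.0)]`.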
Calculation rules
Lambda_z
PKanalix tries to estimate the slope of the terminal elimination phase, called \( \lambda_z \), as well as the intercept, called Lambda_z_intercept. \( \lambda_z \) is calculated via a linear regression between Y = log(concentration) and X = time. Several weightings are available for the regression: uniform, \( 1/Y \) and \( 1/Y^2 \).
Zero and negative concentrations are excluded from the regression (but not from the NCA parameter calculations). The number of points included in the linear regression can be chosen via the “Main rule” setting. In addition, the user can define specific points to include or exclude for each individual (see the Check lambda_z page for details). When one of the automatic “main rules” is used, points prior to Cmax, and the point at Cmax for non-bolus administrations, are never included. These points can however be included manually by the user. If \( \lambda_z \) can be estimated, the NCA parameters will be extrapolated to infinity.
R2 rule: the regression is first done with the last three points, then the last four points, then the last five points, etc. If the R2 for n points is larger than or equal to the R2 for (n-1) points minus 0.0001, then the value for n points is used. Additional constraints on the measurements included in the \( \lambda_z \) calculation can be set using the “maximum number of points” and “minimum time” settings. If strictly fewer than 3 points are available for the regression, or if the calculated slope is positive, the \( \lambda_z \) calculation fails.
Adjusted R2 rule: the regression is first done with the last three points, then the last four points, then the last five points, etc. For each regression, the adjusted R2 is calculated as:
\( \textrm{Adjusted R2} = 1 - \frac{(1-R^2)\times (n-1)}{n-2} \)
with \( n \) the number of data points included and \( R^2 \) the square of the correlation coefficient.
If the adjusted R2 for n points is larger than or equal to the adjusted R2 for (n-1) points minus 0.0001, then the value for n points is used. Additional constraints on the measurements included in the \( \lambda_z \) calculation can be set using the “maximum number of points” and “minimum time” settings. If strictly fewer than 3 points are available for the regression, or if the calculated slope is positive, the \( \lambda_z \) calculation fails.
Interval: strictly positive concentrations within the given time interval are used to calculate \( \lambda_z \). Points on the interval bounds are included. Semi-open intervals can be defined using plus or minus infinity.
Points: the n last points are used to calculate \( \lambda_z \). Negative and zero concentrations are excluded after the selection of the n last points. As a consequence, some individuals may have fewer than n points used.
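A simplified sketch of the “Adjusted R2” selection is shown below. It is illustrative Python, not PKanalix internals: it ignores the Cmax exclusion, the weighting options and the “maximum number of points”/“minimum time” constraints described above, and the function names are assumptions.

```python
import math

def linreg(x, y):
    """Ordinary least-squares line; returns (slope, intercept, R2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    r2 = (sxy * sxy) / (sxx * syy) if syy > 0 else 1.0
    return slope, my - slope * mx, r2

def lambda_z_adjusted_r2(times, concs, delta=1e-4):
    """Regress log(C) vs t on the last 3, 4, 5, ... positive points and
    keep n points when their adjusted R2 is within `delta` of the best so
    far (ties favor more points). Returns (adjusted_r2, n_points,
    lambda_z, intercept), or None if fewer than 3 points are usable or
    the slope is positive."""
    pts = [(t, math.log(c)) for t, c in zip(times, concs) if c > 0]
    best = None
    for n in range(3, len(pts) + 1):
        x, y = zip(*pts[-n:])
        slope, intercept, r2 = linreg(x, y)
        adj = 1 - (1 - r2) * (n - 1) / (n - 2)
        if best is None or adj >= best[0] - delta:
            best = (adj, n, -slope, intercept)
    if best is None or best[2] <= 0:
        return None
    return best
```

On a perfectly mono-exponential profile C(t) = 10·exp(−0.5·t), the sketch recovers \( \lambda_z \approx 0.5 \) using all available terminal points.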
AUC calculation
The following linear and logarithmic rules apply to calculate the AUC and AUMC over an interval [t1, t2] where the measured concentrations are C1 and C2. The total AUC is the sum of the AUCs calculated on each interval. If the logarithmic rule fails on an interval because C1 or C2 is null or negative, the linear rule applies for that interval.
Linear formula:
\( AUC _{t_1}^{t_2} = (t_2-t_1) \times \frac{C_1+C_2}{2} \)
\( AUMC _{t_1}^{t_2} = (t_2-t_1) \times \frac{t_1 \times C_1+ t_2 \times C_2}{2} \)
Logarithmic formula:
\( AUC _{t_1}^{t_2} = (t_2-t_1) \times \frac{C_2 - C_1}{\ln(\frac{C_2}{C_1})} \)
\( AUMC _{t_1}^{t_2} = (t_2-t_1) \times \frac{t_2 \times C_2 - t_1 \times C_1}{\ln(\frac{C_2}{C_1})} - (t_2-t_1)^2 \times \frac{C_2 - C_1}{\ln(\frac{C_2}{C_1})^2} \)
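The two trapezoidal formulas and the linear fallback can be sketched as follows (hypothetical helper, not PKanalix code):

```python
import math

def auc_aumc_segment(t1, t2, c1, c2, rule="linear"):
    # AUC and AUMC contributions over [t1, t2]; falls back to the linear
    # formula when the log formula is not applicable (null, negative, or
    # equal concentrations).
    dt = t2 - t1
    if rule == "log" and c1 > 0 and c2 > 0 and c1 != c2:
        k = math.log(c2 / c1)
        auc = dt * (c2 - c1) / k
        aumc = dt * (t2 * c2 - t1 * c1) / k - dt ** 2 * (c2 - c1) / k ** 2
    else:
        auc = dt * (c1 + c2) / 2
        aumc = dt * (t1 * c1 + t2 * c2) / 2
    return auc, aumc
```

Summing the segment contributions over all adjacent pairs of observations gives the total AUC and AUMC.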
Interpolation formula for partial AUC
When a partial AUC is requested at time points not included in the original data set, it is necessary to add an additional measurement point. These additional time points can be before or after the last observed data point.
Note that the partial AUC is not computed if a bound of the interval falls before the dosing time.
Additional point before last observed data point
Depending on the choice of the “Integral method” setting, this can be done using a linear or log formula to find the added concentration C* at requested time t*, given that the previous and following measurements are C1 at t1 and C2 at t2.
Linear interpolation formula:
\( C^* = C_1 + \left( \frac{t^*-t_1}{t_2-t_1} \right) \times (C_2-C_1) \)
Logarithmic interpolation formula:
\( C^* = \exp \left( \ln(C_1) + \left( \frac{t^*-t_1}{t_2-t_1} \right) \times (\ln(C_2)-\ln(C_1)) \right) \)
If the logarithmic interpolation rule fails in an interval because C1 or C2 are null or negative, then the linear interpolation rule will apply for that interval.
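The two interpolation formulas, including the fallback to the linear one, can be sketched as (hypothetical helper, not PKanalix code):

```python
import math

def interpolate_conc(t_star, t1, c1, t2, c2, method="linear"):
    # Interpolated concentration C* at t1 <= t_star <= t2; the log formula
    # falls back to the linear one when C1 or C2 is null or negative.
    frac = (t_star - t1) / (t2 - t1)
    if method == "log" and c1 > 0 and c2 > 0:
        return math.exp(math.log(c1) + frac * (math.log(c2) - math.log(c1)))
    return c1 + frac * (c2 - c1)
```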
Additional point after last observed data point
If \( \lambda_z \) is not estimable, the partial area will not be calculated. Otherwise, \( \lambda_z \) is used to calculate the additional concentration C*:
\( C^* = \exp(\textrm{Lambda_z_intercept} - \lambda_z \times t) \)
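This extrapolation simply reads the concentration off the terminal regression line; a minimal sketch (hypothetical helper):

```python
import math

def extrapolate_conc(t_star, lambda_z, lambda_z_intercept):
    # Concentration beyond the last observation, read off the terminal
    # regression line log(C) = Lambda_z_intercept - Lambda_z * t.
    return math.exp(lambda_z_intercept - lambda_z * t_star)
```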
3.3.NCA parameters
The following page describes all the parameters computed by the non compartmental analysis. Parameter names are fixed and cannot be changed.
 Parameters related to \(\lambda_z\)
 Parameters related to plasma/blood measurements
 Parameters related to plasma/blood measurements specific to steady state dosing regimen
 Parameters related to urine measurements
Parameters related to \(\lambda_z\)
Name  PKPARMCD CDISC  PKPARM CDISC  UNITS  DESCRIPTION 

Rsq  R2  R Squared  no unit  Goodness of fit statistic for the terminal (loglinear) phase between the linear regression and the data 
Rsq_adjusted  R2ADJ  R Squared Adjusted  no unit  Goodness of fit statistic for the terminal elimination phase, adjusted for the number of points used in the estimation of \(\lambda_z\) 
Corr_XY  CORRXY  Correlation Between TimeX and Log ConcY  no unit  Correlation between time (X) and log concentration (Y) for the points used in the estimation of \(\lambda_z\) 
No_points_lambda_z  LAMZNPT  Number of Points for Lambda z  no unit  Number of points considered in the \(\lambda_z\) regression 
Lambda_z  LAMZ  Lambda z  1/time  First order rate constant associated with the terminal (loglinear) portion of the curve. Estimated by linear regression of time vs. log concentration 
Lambda_z_lower  LAMZLL  Lambda z Lower Limit  time  Lower limit on time for values to be included in the \(\lambda_z\) calculation 
Lambda_z_upper  LAMZUL  Lambda z Upper Limit  time  Upper limit on time for values to be included in the \(\lambda_z\) calculation 
HL_Lambda_z  LAMZHL  HalfLife Lambda z  time  Terminal halflife = ln(2)/Lambda_z 
Lambda_z_intercept  –  –  no unit  Intercept found during the regression for \(\lambda_z\), i.e. the value of the regression line (in log scale) at time 0; the regression writes log(Concentration) = -Lambda_z*t + Lambda_z_intercept 
Span  –  –  no unit  Ratio between the sampling interval of the measurements used for the \(\lambda_z\) calculation and the terminal half-life = (Lambda_z_upper - Lambda_z_lower)*Lambda_z/ln(2) 
Parameters related to plasma/blood measurements
Name  PKPARMCD CDISC  PKPARM CDISC  UNITS  DESCRIPTION 

Tlag  TLAG  Time Until First Nonzero Conc  time  Tlag is the time prior to the first measurable (nonzero) concentration. Tlag is 0 if the first observation after the last dose is not 0 or LOQ. The value is set to 0 for non-extravascular input. 
T0  –  –  time  Time of the dose 
Dose  –  –  amount  Amount of the dose 
N_Samples  –  –  no unit  Number of samples for the individual. 
C0  C0  Initial Conc  amount/volume  If a PK profile does not contain an observation at dose time, the following value is added as C0. Extravascular and infusion data: for single-dose data, a concentration of zero; for steady-state data, the minimum observed during the dose interval. IV bolus data: log-linear regression of the first two data points to back-extrapolate C0. 
Tmax  TMAX  Time of CMAX  time  Time of maximum observed concentration. For non-steady-state data, the entire curve is considered. For steady-state data, Tmax corresponds to points collected during a dosing interval. If the maximum observed concentration is not unique, then the first maximum is used. 
Cmax  CMAX  Max Conc  amount/volume  Maximum observed concentration, occurring at Tmax. If not unique, then the first maximum is used. 
Cmax_D  CMAXD  Max Conc Norm by Dose  1/volume  Maximum observed concentration divided by dose. Cmax_D = Cmax/Dose 
Tlast  TLST  Time of Last Nonzero Conc  time  Last time point with measurable concentration 
Clast  CLST  Last Nonzero Conc  amount/volume  Concentration of last time point with measurable concentration 
AUClast  AUCLST  AUC to Last Nonzero Conc  time.amount/volume  Area under the curve from the time of dosing to the last measurable positive concentration. The calculation depends on the Integral method setting. 
AUClast_D  AUCLSTD  AUC to Last Nonzero Conc Norm by Dose  time/volume  Area under the curve from the time of dosing to the last measurable concentration divided by the dose. AUClast_D = AUClast/Dose 
AUMClast  AUMCLST  AUMC to Last Nonzero Conc  time2.amount/volume  Area under the moment curve (area under a plot of the product of concentration and time versus time) from the time of dosing to the last measurable concentration. 
MRTlast  MRTIVLST  MRT Intravasc to Last Nonzero Conc  time  [if intravascular] Mean residence time from the time of dosing to the time of the last measurable concentration, for a substance administered by intravascular dosing. MRTlast_iv = AUMClast/AUClast - TI/2, where TI represents the infusion duration. 
MRTlast  MRTEVLST  MRT Extravasc to Last Nonzero Conc  time  [if extravascular] Mean residence time from the time of dosing to the time of the last measurable concentration, for a substance administered by extravascular dosing. MRTlast_ev = AUMClast/AUClast - TI/2, where TI represents the infusion duration. 
AUCall  AUCALL  AUC All  time.amount/volume  Area under the curve from the time of dosing to the time of the last observation. If the last concentration is positive, AUCall = AUClast. Otherwise, AUCall is not equal to AUClast, as it includes the additional area from the last measurable concentration down to zero or negative observations. 
AUCINF_obs  AUCIFO  AUC Infinity Obs  time.amount/volume  AUC from Dosing_time extrapolated to infinity, based on the last observed concentration. AUCINF_obs = AUClast + Clast/Lambda_z 
AUCINF_D_obs  AUCIFOD  AUC Infinity Obs Norm by Dose  time/volume  AUCINF_obs divided by dose AUCINF_D_obs = AUCINF_obs/Dose 
AUC_PerCentExtrap_obs  AUCPEO  AUC %Extrapolation Obs  %  Percentage of AUCINF_obs due to extrapolation from Tlast to infinity. AUC_%Extrap_obs = 100*(1 - AUClast/AUCINF_obs) 
AUC_PerCentBack_Ext_obs  AUCPBEO  AUC %Back Extrapolation Obs  %  Applies only for intravascular bolus dosing. The percentage of AUCINF_obs that was due to back extrapolation to estimate C(0). 
AUMCINF_obs  AUMCIFO  AUMC Infinity Obs  time2.amount/volume  Area under the first moment curve to infinity using observed Clast AUMCINF_obs = AUMClast + (Clast/Lambda_z)*(Tlast + 1.0/Lambda_z) 
AUMC_PerCentExtrap_obs  AUMCPEO  AUMC % Extrapolation Obs  %  Extrapolated (% of total) area under the first moment curve to infinity using observed Clast. AUMC_%Extrap_obs = 100*(1 - AUMClast/AUMCINF_obs) 
MRTINF_obs  MRTIVIFO  MRT Intravasc Infinity Obs  time  [if intravascular] Mean Residence Time extrapolated to infinity for a substance administered by intravascular dosing, using observed Clast. MRTINF_obs_iv = AUMCINF_obs/AUCINF_obs - TI/2, where TI represents the infusion duration. 
MRTINF_obs  MRTEVIFO  MRT Extravasc Infinity Obs  time  [if extravascular] Mean Residence Time extrapolated to infinity for a substance administered by extravascular dosing using observed Clast MRTINF_obs_ev = AUMCINF_obs/AUCINF_obs 
Vz_F_obs  VZFO  Vz Obs by F  volume  [if extravascular] Volume of distribution associated with the terminal phase divided by F (bioavailability) Vz_F_obs = Dose/Lambda_z/AUCINF_obs 
Cl_F_obs  CLFO  Total CL Obs by F  volume/time  [if extravascular] Clearance over F (based on observed Clast) Cl_F_obs = Dose/AUCINF_obs 
Vz_obs  VZO  Vz Obs  volume  [if intravascular] Volume of distribution associated with the terminal phase Vz_obs= Dose/Lambda_z/AUCINF_obs 
Cl_obs  CLO  Total CL Obs  volume/time  [if intravascular] Clearance (based on observed Clast) Cl_obs = Dose/AUCINF_obs 
Vss_obs  VSSO  Vol Dist Steady State Obs  volume  [if intravascular] An estimate of the volume of distribution at steady state, based on the last observed concentration. Vss_obs = MRTINF_obs*Cl_obs 
Clast_pred  –  –  amount/volume  Clast_pred = exp(Lambda_z_intercept - Lambda_z*Tlast). The values alpha (corresponding to the y-intercept obtained when calculating \(\lambda_z\)) and Lambda_z are those found during the regression for \(\lambda_z\) 
AUCINF_pred  AUCIFP  AUC Infinity Pred  time.amount/volume  Area under the curve from the dose time extrapolated to infinity, based on the last predicted concentration, i.e., concentration at the final observation time estimated using the linear regression performed to estimate \(\lambda_z\) . AUCINF_pred = AUClast + Clast_pred/Lambda_z 
AUCINF_D_pred  AUCIFPD  AUC Infinity Pred Norm by Dose  time/volume  AUCINF_pred divided by dose = AUCINF_pred/Dose 
AUC_PerCentExtrap_pred  AUCPEP  AUC %Extrapolation Pred  %  Percentage of AUCINF_pred due to extrapolation from Tlast to infinity. AUC_%Extrap_pred = 100*(1 - AUClast/AUCINF_pred) 
AUC_PerCentBack_Ext_pred  AUCPBEP  AUC %Back Extrapolation Pred  %  Applies only for intravascular bolus dosing. The percentage of AUCINF_pred that was due to back extrapolation to estimate C(0). 
AUMCINF_pred  AUMCIFP  AUMC Infinity Pred  time2.amount/volume  Area under the first moment curve to infinity using predicted Clast AUMCINF_pred = AUMClast + (Clast_pred/Lambda_z)*(Tlast+1/Lambda_z) 
AUMC_PerCentExtrap_pred  AUMCPEP  AUMC % Extrapolation Pred  %  Extrapolated (% of total) area under the first moment curve to infinity using predicted Clast. AUMC_%Extrap_pred = 100*(1 - AUMClast/AUMCINF_pred) 
MRTINF_pred  MRTIVIFP  MRT Intravasc Infinity Pred  time  [if intravascular] Mean Residence Time extrapolated to infinity for a substance administered by intravascular dosing using predicted Clast 
MRTINF_pred  MRTEVIFP  MRT Extravasc Infinity Pred  time  [if extravascular] Mean Residence Time extrapolated to infinity for a substance administered by extravascular dosing using predicted Clast 
Vz_F_pred  VZFP  Vz Pred by F  volume  [if extravascular] Volume of distribution associated with the terminal phase divided by F (bioavailability) = Dose/Lambda_z/AUCINF_pred 
Cl_F_pred  CLFP  Total CL Pred by F  volume/time  [if extravascular] Clearance over F (using predicted Clast) Cl_F_pred = Dose/AUCINF_pred 
Vz_pred  VZP  Vz Pred  volume  [if intravascular] Volume of distribution associated with the terminal phase. Vz_pred = Dose/Lambda_z/AUCINF_pred 
Cl_pred  CLP  Total CL Pred  volume/time  [if intravascular] Clearance (using predicted Clast) = Dose/AUCINF_pred 
Vss_pred  VSSP  Vol Dist Steady State Pred  volume  [if intravascular] An estimate of the volume of distribution at steady state based on the last predicted concentration. Vss_pred = MRTINF_pred*Cl_pred 
AUC_lower_upper  AUCINT  AUC from T1 to T2  time.amount/volume  AUC from T1 to T2 (partial AUC) 
AUC_lower_upper_D  AUCINTD  AUC from T1 to T2 Norm by Dose  time/volume  AUC from T1 to T2 (partial AUC) divided by Dose 
CAVG_lower_upper  CAVGINT  Average Conc from T1 to T2  amount/volume  Average concentration from T1 to T2 
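The observed-Clast extrapolation parameters from the table above (AUCINF_obs, AUC_%Extrap_obs, Cl_obs, Vz_obs) can be sketched as follows; the function name and dictionary layout are illustrative only:

```python
def extrapolation_parameters(auclast, clast, lambda_z, dose):
    # Observed-Clast variants of the infinity-extrapolated parameters.
    # For extravascular data, the last two entries read Cl_F_obs and
    # Vz_F_obs instead (same formulas, clearance and volume over F).
    aucinf = auclast + clast / lambda_z
    return {
        "AUCINF_obs": aucinf,
        "AUC_PerCentExtrap_obs": 100.0 * (1.0 - auclast / aucinf),
        "Cl_obs": dose / aucinf,
        "Vz_obs": dose / (lambda_z * aucinf),
    }
```

The predicted-Clast variants follow the same formulas with Clast_pred in place of Clast.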
Parameters related to plasma/blood measurements specific to steady state dosing regimen
In the case of repeated doses, dedicated parameters define the steady-state behavior, and specific formulas apply, for example for the clearance and the volume. Notice that the calculations dedicated to the area under the first moment curve (AUMC) are not relevant in this case.
Name  PKPARMCD CDISC  PKPARM CDISC  UNITS  DESCRIPTION 

Tau  –  –  time  The (assumed equal) dosing interval for steady-state data. 
Ctau  CTAU  Conc Trough  amount/volume  Concentration at the end of the dosing interval. If the observed concentration does not exist, the value is interpolated. If it cannot be interpolated, it is extrapolated using Lambda_z. If Lambda_z has not been computed, it is extrapolated as the last observed value. 
Ctrough  CTROUGH  Conc Trough  amount/volume  Concentration at end of dosing interval. If the observed concentration does not exist, the value is NaN. 
AUC_TAU  AUCTAU  AUC Over Dosing Interval  time.amount/volume  The area under the curve (AUC) for the defined interval between doses (TAU). The calculation depends on the Integral method setting. 
AUC_TAU_D  AUCTAUD  AUC Over Dosing Interval Norm by Dose  time/volume  The area under the curve (AUC) for the defined interval between doses (TAU) divided by the dose. AUC_TAU_D = AUC_TAU/Dose 
AUC_TAU_PerCentExtrap  –  –  %  Percentage of AUC_TAU due to extrapolation in steady state. AUC_TAU_%Extrap = 100*(AUC over [Tlast, Tau], if Tlast <= Tau)/AUC_TAU 
AUMC_TAU  AUMCTAU  AUMC Over Dosing Interval  time2.amount/volume  The area under the first moment curve (AUMC) for the defined interval between doses (TAU). 
Vz_F  VZFTAU  Vz for Dose Int by F  volume  [if extravascular] The volume of distribution associated with the terminal slope following extravascular administration divided by the fraction of dose absorbed, calculated using AUC_TAU. Vz_F = Dose/Lambda_z/AUC_TAU 
Vz  VZTAU  Vz for Dose Int  volume  [if intravascular] The volume of distribution associated with the terminal slope following intravascular administration, calculated using AUC_TAU. Vz = Dose/Lambda_z/AUC_TAU 
CLss_F  CLFTAU  Total CL by F for Dose Int  volume/time  [if extravascular] The total body clearance for extravascular administration divided by the fraction of dose absorbed, calculated using AUC_TAU . CLss_F = Dose/AUC_TAU 
CLss  CLTAU  Total CL for Dose Int  volume/time  [if intravascular] The total body clearance for intravascular administration, calculated using AUC_TAU. CLss = Dose/AUC_TAU 
Cavg  CAVG  Average Concentration  amount/volume  AUCTAU divided by Tau. Cavg = AUC_TAU /Tau 
FluctuationPerCent  FLUCP  Fluctuation%  %  The difference between Cmin and Cmax standardized to Cavg, between dose time and Tau. Fluctuation% = 100.0*(Cmax - Cmin)/Cavg 
FluctuationPerCent_Tau  –  –  %  The difference between Ctau and Cmax standardized to Cavg, between dose time and Tau. Fluctuation%_Tau = 100.0*(Cmax - Ctau)/Cavg 
Accumulation_Index  AILAMZ  Accumulation Index using Lambda z  no unit  Theoretical accumulation ratio: predicted accumulation ratio for the area under the curve (AUC), calculated using the Lambda_z estimated from single-dose data. Accumulation_Index = 1.0/(1.0 - exp(-Lambda_z*Tau)) 
Swing  –  –  no unit  The degree of fluctuation over one dosing interval at steady state. Swing = (Cmax - Cmin)/Cmin 
Swing_Tau  –  –  no unit  Swing_Tau = (Cmax - Ctau)/Ctau 
Tmin  TMIN  Time of CMIN Observation  time  Time of minimum concentration sampled during a dosing interval. 
Cmin  CMIN  Min Conc  amount/volume  Minimum observed concentration between dose time and dose time + Tau. 
Cmax  CMAX  Max Conc  amount/volume  Maximum observed concentration between dose time and dose time + Tau. 
MRTINF_obs  MRTIVIFO or MRTEVIFO  MRT Intravasc Infinity Obs or MRT Extravasc Infinity Obs  time  Mean Residence Time extrapolated to infinity using observed Clast, calculated using AUC_TAU. 
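The steady-state exposure and fluctuation formulas from the table above can be sketched as (hypothetical helper, not PKanalix code):

```python
import math

def steady_state_parameters(auc_tau, tau, cmax, cmin, ctau, lambda_z):
    # Cavg and the fluctuation/accumulation parameters; Cmax and Cmin are
    # the extrema observed between dose time and dose time + Tau.
    cavg = auc_tau / tau
    return {
        "Cavg": cavg,
        "FluctuationPerCent": 100.0 * (cmax - cmin) / cavg,
        "FluctuationPerCent_Tau": 100.0 * (cmax - ctau) / cavg,
        "Swing": (cmax - cmin) / cmin,
        "Swing_Tau": (cmax - ctau) / ctau,
        "Accumulation_Index": 1.0 / (1.0 - math.exp(-lambda_z * tau)),
    }
```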
Parameters related to urine measurements
Name  PKPARMCD CDISC  PKPARM CDISC  UNITS  DESCRIPTION 

T0  –  –  time  Time of the last administered dose (assumed to be zero unless otherwise specified). 
Dose  –  –  amount  Amount of the dose 
N_Samples  –  –  no unit  Number of samples for the individual. 
Tlag  TLAG  Time Until First Nonzero Conc  time  Midpoint prior to the first measurable (nonzero) rate. 
Tmax_Rate  ERTMAX  Midpoint of Interval of Maximum ER  time  Midpoint of collection interval associated with the maximum observed excretion rate. 
Max_Rate  ERMAX  Max Excretion Rate  amount/time  Maximum observed excretion rate. 
Mid_Pt_last  ERTLST  Midpoint of Interval of Last Nonzero ER  time  Midpoint of collection interval associated with Rate_last. 
Rate_last  ERLST  Last Meas Excretion Rate  amount/time  Last measurable (positive) rate. 
AURC_last  AURCLST  AURC to Last Nonzero Rate  amount  Area under the urinary excretion rate curve from time 0 to the last measurable rate. 
AURC_last_D  AURCLSTD  AURC to Last Nonzero Rate Norm by Dose  no unit  The area under the urinary excretion rate curve (AURC) from time zero to the last measurable rate, divided by the dose. 
Vol_UR  VOLPK  Sum of Urine Vol  volume  Sum of Urine Volumes 
Amount_Recovered  –  –  amount  Cumulative amount eliminated. 
Percent_Recovered  –  –  %  100*Amount_Recovered/Dose 
AURC_all  AURCALL  AURC All  amount  Area under the urinary excretion rate curve from time 0 to the last rate. This equals AURC_last if the last rate is measurable. 
AURC_INF_obs  AURCIFO  AURC Infinity Obs  amount  Area under the urinary excretion rate curve extrapolated to infinity, based on the last observed excretion rate. 
AURC_PerCentExtrap_obs  AURCPEO  AURC % Extrapolation Obs  %  Percent of AURC_INF_obs that is extrapolated 
AURC_INF_pred  AURCIFP  AURC Infinity Pred  amount  Area under the urinary excretion rate curve extrapolated to infinity, based on the last predicted excretion rate. 
AURC_PerCentExtrap_pred  AURCPEP  AURC % Extrapolation Pred  %  Percent of AURC_INF_pred that is extrapolated 
AURC_lower_upper  AURCINT  AURC from T1 to T2 (partial AUC)  amount  The area under the urinary excretion rate curve (AURC) over the interval from T1 to T2. 
AURC_lower_upper_D  AURCINTD  AURC from T1 to T2 Norm by Dose  no unit  The area under the urinary excretion rate curve (AURC) over the interval from T1 to T2 divided by Dose 
Rate_last_pred  –  –  amount/time  Predicted last excretion rate; the values alpha (y-intercept) and Lambda_z are those found during the regression for \(\lambda_z\) 
3.4.NCA settings
The following page describes all the settings for the parameter calculations.
 Calculations settings
 \(\lambda_z\) calculation settings
 Parameters to compute
 Acceptance criteria settings
 Global: Obs id to use
Calculations
These settings impact the calculation of the NCA parameters.
 Administration type: Intravenous or Extravascular. This defines the drug type of administration. IV bolus and IV infusions must be set as “intravenous”.
 Integral method: Method for the AUC and AUMC calculation and interpolation. Trapezoidal refers to the formula used to calculate the AUC, and Interpolation refers to the formula used to add an additional point in case of a partial AUC with a time point not originally present in the data set. See the Calculation rules for details on the formulas.
 Linear Trapezoidal Linear (Linear trapezoidal, linear interpolation): For AUC calculation linear formula. For interpolation linear formula.
 Linear Log Trapezoidal (Linear/log trapezoidal, linear/log interpolation): For AUC, linear before Cmax and log after Cmax. For interpolation, linear before Cmax and log after.
 Linear Up Log Down (LinUpLogDown trapezoidal, LinUpLogDown interpolation): For AUC, linear if concentration is going up or is stable, log if going down. For interpolation, linear if concentration is going up or is stable, log if going down.
 Linear Trapezoidal linear/log (Linear trapezoidal, linear/log interpolation): For AUC, linear formula. For interpolation, linear before Cmax, log after Cmax. If several Cmax, the first one is used.
For all methods, if an observation value is less than or equal to zero, the program defaults to the linear trapezoidal or interpolation rule for that point. Similarly, if adjacent observation values are equal to each other, the program defaults to the linear trapezoidal or interpolation rule.
 Partial AUC time: Defines whether to compute a partial AUC over a specific time interval. It is possible to define several partial AUC intervals, applicable to all individuals.
 BLQ before Tmax: In case of BLQ measurements, this corresponds to the method by which the BLQ data before Tmax should be replaced. Possible methods are “missing”, “LOQ”, “LOQ/2” or “0” (default value is 0).
 BLQ after Tmax: In case of BLQ measurements, this corresponds to the method by which the BLQ data after Tmax should be replaced. Possible methods are “missing”, “LOQ”, “LOQ/2” or “0” (default value is LOQ/2).
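A minimal sketch of the two BLQ-replacement settings (hypothetical helper, not PKanalix code; `tmax_index` is assumed to be the index of Tmax in the concentration vector):

```python
def replace_blq(concs, is_blq, loq, tmax_index, before="0", after="LOQ/2"):
    # Replace BLQ observations: "missing" drops the point (returned here
    # as None), the other methods substitute LOQ, LOQ/2 or 0. The default
    # arguments mirror the defaults listed above (0 before Tmax, LOQ/2 after).
    rules = {"missing": None, "LOQ": loq, "LOQ/2": loq / 2.0, "0": 0.0}
    return [
        c if not blq else (rules[before] if i < tmax_index else rules[after])
        for i, (c, blq) in enumerate(zip(concs, is_blq))
    ]
```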
Settings impacting the \(\lambda_z\) calculation
These settings impact the calculation of \(\lambda_z\). See also the Check lambda_z page.
 Main rule for the \(\lambda_z\) estimation. It corresponds to the rule defining the measurements used for the \(\lambda_z\) calculation. Possible rules are “R2”, “Interval”, “Points” or “adjustedR2” (called Best Fit in WinNonlin; default value is “adjustedR2”). See the calculation rules for more details.
 In case of the “R2” or “adjustedR2” rule, the user has the possibility to define a maximum number of points and/or a minimum time for the \(\lambda_z\) estimation, to constrain the points tested for inclusion.
 In case of “Interval” rule, the user has the possibility to define the time interval to consider.
 In case of “Points” rule, the user has the possibility to define the number of points to consider.
 Weighting method used for the regression that estimates \(\lambda_z\). Possible methods are “Y”, “Y2” or “uniform” (default value is “uniform”).
Selecting NCA parameters to compute
The NCA parameters to compute can be selected from the “Parameter” column to be placed in the “Computed parameters” column, either by clicking on one or several parameters and using the arrows, or by double clicking on the parameters.
It is recommended to always compute Rsq_adjusted, No_points_lambda_z, Lambda_z and Lambda_z_intercept as these parameters are necessary to display the terminal slope and information box in the NCA fit plot.
The list of parameters computed by default when starting a new PKanalix project can be defined in the Settings > Preferences.
Acceptance criteria
These settings define the acceptance criteria. When acceptance criteria are defined, flags indicating whether each condition is met are added to the output result table. Statistics in the summary table can be calculated using only the individuals who satisfy one or several acceptance criteria. The filtering option is available directly in the Summary subtab of the NCA results.
 Adjusted R2: It corresponds to the threshold of the adjusted R2 for the estimation of \(\lambda_z\). If activated, it will fill the value of Flag_Rsq_adjusted: if Rsq_adjusted > threshold, then Flag_Rsq_adjusted=1. The default value is 0.98.
 % extrapolated AUC: It corresponds to the threshold of the percentage of the total predicted AUC (or AURC for urine models) due to the extrapolation to infinity after the last measurement. If activated, it will fill the value of Flag_AUC_%Extrap_pred: if AUC_%Extrap_pred < threshold, then Flag_AUC_%Extrap_pred=1. The default value is 20%.
 Span: It corresponds to the threshold of the span. The span corresponds to the ratio of the sampling interval length for the \(\lambda_z\) calculation and the halflife. If activated, it will fill the value of Flag_Span: if Span > threshold, then Flag_Span=1. The default value is 3.
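The three flags and their default thresholds can be sketched as (hypothetical helper, not PKanalix code):

```python
def acceptance_flags(rsq_adjusted, auc_pct_extrap_pred, span,
                     r2_threshold=0.98, extrap_threshold=20.0,
                     span_threshold=3.0):
    # One flag per acceptance criterion, 1 when the condition is met;
    # the default thresholds mirror the defaults listed above.
    return {
        "Flag_Rsq_adjusted": int(rsq_adjusted > r2_threshold),
        "Flag_AUC_PerCentExtrap_pred": int(auc_pct_extrap_pred < extrap_threshold),
        "Flag_Span": int(span > span_threshold),
    }
```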
Global
 Obs. ID to use: when an OBSERVATION ID column defines several types of measurements in the data set, this setting permits choosing which one to use for the NCA analysis.
3.5.Parameters summary
This page provides information on the statistics used for the summary of the individual parameters, for both the NCA and CA tasks. All the statistics are computed over the NOBS individuals for whom the parameter has a value. For example, in the NCA task, the parameter \(\lambda_z\) might not be computable for some subjects; the value for these subjects is then missing and not taken into account in the statistics.
 MIN: Minimum of the parameter over the NOBS individuals
 Q1: First quartile of the parameter over the NOBS individuals
 MEDIAN: Median of the parameter over the NOBS individuals
 Q3: Third quartile of the parameter over the NOBS individuals
 MAX: Maximum of the parameter over the NOBS individuals
 MEAN: Mean of the parameter over the NOBS individuals
 SD: Standard deviation of the parameter over the NOBS individuals
 SE: Standard error of the parameter over the NOBS individuals
 CV: Coefficient of variation of the parameter over the NOBS individuals
 NTOT: Total number of individuals
 NOBS: Number of individuals with a valid value
 NMISS: Number of individuals with no valid value
 GEOMEAN: Geometric mean of the parameter over the NOBS individuals
 GEOSD: Geometric standard deviation of the parameter over the NOBS individuals
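The handling of missing values and a subset of these statistics can be sketched as follows (hypothetical helper; the median and quartiles are omitted for brevity):

```python
import math

def summarize(values):
    # Statistics over the NOBS individuals with a valid value; None/NaN
    # entries (parameters that could not be computed) count in NMISS only.
    obs = sorted(v for v in values if v is not None and not math.isnan(v))
    n = len(obs)
    mean = sum(obs) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in obs) / (n - 1))
    log_mean = sum(math.log(v) for v in obs) / n  # geometric stats need v > 0
    geo_sd = math.exp(math.sqrt(
        sum((math.log(v) - log_mean) ** 2 for v in obs) / (n - 1)))
    return {
        "NTOT": len(values), "NOBS": n, "NMISS": len(values) - n,
        "MIN": obs[0], "MAX": obs[-1], "MEAN": mean, "SD": sd,
        "SE": sd / math.sqrt(n), "CV": 100.0 * sd / mean,
        "GEOMEAN": math.exp(log_mean), "GEOSD": geo_sd,
    }
```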
3.6.NCA parameters with respect to covariates
Purpose
The figure displays the individual parameters as a function of the covariates. It helps identify correlations between the individual parameters and the covariates.
Identifying correlation effects
In the example below, we can see the parameters Cl_obs and Cmax with respect to the covariates: the weight WT, the age AGE and the sex category.
Visual guidelines
In order to help identify correlations, regression lines, spline interpolations and correlation coefficients can be overlaid on the plots for continuous covariates. Here we can see a strong correlation between the parameter Cl_obs and both the age and the weight.
Overlay of values on top of the boxplot
The NCA parameter values used to generate the boxplots for the categorical covariates can be overlaid on top of the boxplots, which makes it easier to visualize the distribution in case of a small number of individuals.
Highlight
Hovering on a point reveals the corresponding individual and, if multiple individual parameters have been simulated from the conditional distribution for each individual, highlights all the points from the same individual. This is useful to identify possible outliers and subsequently check their behavior in the observed data.
Selection
It is possible to select a subset of covariates or parameters, as shown below. In the selection panel, tick the items you would like to see and click “Accept”. This is useful when there are many parameters or covariates.
Stratification
Stratification can be applied by creating groups of covariate values. As can be seen below, these groups can then be split, colored or filtered, allowing to check the effect of the covariate on the correlation between two parameters. The correlation coefficient is updated according to the split or filtering.
Settings
 General
 Legend: add/remove the legend. There is only one legend for all plots.
 Grid: add/remove the grid.
 Information: display/hide the correlation coefficient associated with each scatter plot.
 Units: display/hide the units of the NCA parameters on the y axis
 Display
 Individual parameters: Selects the NCA parameters to display as subplots on rows. Tick the items you would like to see and click “accept”.
 Covariates: Selects the covariates to display as subplots on columns. Tick the items you would like to see and click “accept”.
 Boxplots data: overlay the NCA parameters as dots on top of the boxplots for categorical covariates, either aligned on a vertical line (“aligned”), spread over the boxplot width (“spread”) or not at all (“none”).
 Regression line: Add/remove the linear regression line corresponding to the correlation coefficient.
 Spline: Add/remove the spline. Spline settings cannot be changed.
3.7.Distribution of the NCA parameters
Purpose
This figure can be used to see the empirical distribution of the NCA parameters. Further analysis such as stratification by covariate can be performed and will be detailed below.
PDF and CDF
It is possible to display the theoretical distribution and the histogram of the empirical distribution as proposed below.
The distributions are represented as histograms for the probability density function (PDF). Hovering on the histogram also reveals the density value of each bin as shown on the figure below
The cumulative distribution function (CDF) is proposed too.
Example of stratification
It is possible to stratify the population by some covariate values and obtain the distributions of the individual parameters in each group. This can be useful to check a covariate effect, in particular when the distribution of a parameter exhibits two or more peaks for the whole population. In the following example, the distribution of the parameter k from the same example as above has been split into two groups of individuals according to the sex, allowing to visualize two clearly different distributions.
Settings
 General: add/remove the legend, and the grid
 Display
 Distribution function: The user can choose to display either the probability density function (PDF) as histogram or the cumulative distribution function (CDF).
3.8.Correlation between NCA parameters
Purpose
This plot displays scatter plots for each pair of parameters. It allows to identify correlations between parameters and to assess the coherence of the estimated parameters across individuals.
Example
In the following example, one can see pairs of parameters estimated for all parameters.
Visual guidelines
In addition to regression lines, correlation coefficients can be added to quantify the correlation between parameters, as well as spline interpolations.
Selection
It is possible to select a subset of parameters, whose pairs of correlations are then displayed, as shown below. In the selection panel, a set of contiguous rows can be selected with a single extended click, or a set of noncontiguous rows can be selected with several clicks while holding the Ctrl key.
Highlight
Similarly to other plots, hovering on a point provides information on the corresponding subject id, and highlights other points corresponding to the same individual.
Stratification: coloring and filtering
Stratification can be applied by creating groups of covariate values. As can be seen below, these groups can then be split, colored and/or filtered, allowing to check the effect of the covariate on the correlation between two parameters. The correlation coefficient is updated according to the stratifying action. In the following case, we split by the covariate SEX and color by 2 categories of AGE.
Settings
 General
 Legend and grid : add/remove the legend or the grid. There is only one legend for all plots.
 Information: display/hide the correlation coefficient associated with each scatter plot.
 Display
 Selection. The user can select some of the parameters to display only the corresponding scatter plots. A simple click selects one parameter, whereas multiple clicks while holding the Ctrl key selects a set of parameters.
 Visual cues. Add/remove the regression line or the spline interpolation.
4.Compartmental Analysis
One of the main features of PKanalix is the calculation of the parameters in the Compartmental Analysis framework. It consists in finding parameters of a model representing the PK as the dynamics in compartments for each individual. The optimization uses the Nelder-Mead algorithm.
CA task
There is a dedicated task in the “Tasks” frame as in the following figure.
This task contains two different parts.
 The first one, called “Run”, corresponds to the calculation button and the settings for the model and the calculation. The meaning of all the settings and their defaults is described here.
 The second one, called “Check init.”, displays the predictions obtained with the initial values for each individual, together with the data points (as explained here). It is very useful to find good initial estimates before the optimization.
CA results
Once the CA task has been run, the results are available in the “Results” frame. Two tables are proposed.
Compartmental analysis results per individual
Individual estimates of the CA parameters are displayed in the table of the “INDIV. ESTIM.” tab, as in the following figure.
All the computed parameters depend on the chosen model. Notice that on all tables, there is an icon on the top right to copy the table into a Word or Excel file.
Statistics on compartmental analysis results
A summary table is also proposed in the “SUMMARY” tab, as in the following figure.
All the summary calculations are described here. Notice that on all tables, there is an icon on the top right to copy the table into a Word or Excel file.
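Both tables can also be retrieved from R with the connector functions listed in the R functions section. A minimal sketch; the project path is a placeholder:

```r
# Sketch: retrieve the two CA result tables from R with lixoftConnectors.
# Requires a local MonolixSuite installation; the project path is hypothetical.
library(lixoftConnectors)
initializeLixoftConnectors(software = "pkanalix")
loadProject("/path/to/project.pkx")
runCAEstimation()
indiv <- getCAIndividualParameters()   # "INDIV. ESTIM." table
stats <- getCAParameterStatistics()    # "SUMMARY" table
```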
CA plots
In the “Plots” frame, several plots associated with the individual parameters are displayed.
 Individual fits: The purpose of this plot is to display fit for each individual.
 Correlation between CA parameters: The purpose of this plot is to display scatter plots for each pair of parameters. It allows to identify correlations between parameters and to check the coherence of the parameters across individuals.
 Distribution of the CA parameters: The purpose of this plot is to see the empirical distribution of the parameters and thus have an idea of their distribution over the individuals.
 CA parameters w.r.t. covariates: The purpose of this plot is to display the individual parameters as a function of the covariates. It allows to identify correlation effects between the individual parameters and the covariates.
CA outputs
After running the CA task, the following files are available in the result folder:
 caSummary.txt contains the summary of the CA parameters calculation, in a format easily readable by a human (but not easy to parse for a computer)
 caIndividualParametersSummary.txt contains the summary of the CA parameters in a friendly computer format.
 The first column corresponds to the name of the parameters
 The other columns correspond to the several elements describing the summary of the parameters (as explained here)
 caIndividualParameters.txt contains the CA parameters for each subject-occasion along with the covariates.
 The first line corresponds to the name of the parameters
 The other lines correspond to the value of the parameters
The files caIndividualParametersSummary.txt and caIndividualParameters.txt can be read into R, for example using the following command
read.table("/path/to/file.txt", sep = ",", header = T)
Remark
 The separator is the one defined in the user preferences. We use “,” in this example as it is the default one.
4.1.CA settings
The following page describes all the settings for the parameters calculations.
Model
These settings correspond to the model used for the individual fit of the data set. This model is a PK model from the MonolixSuite PK library.
The PK library includes models with different administration routes (bolus, infusion, first-order absorption, zero-order absorption, with or without Tlag), different numbers of compartments (1, 2 or 3), and different types of elimination (linear or Michaelis-Menten). More details, including the full equations of each model, can be found on the dedicated page for the model libraries.
The PK library models can be used with single or multiple doses data, but they allow only one type of administration in the data set (only oral or only bolus, but not some individuals with bolus and some with oral, for instance).
When you click on “SELECT”, the list of available model files appears, as well as a menu to filter them. Use the filters and the indications in the file names (parameter names) to select the model file you need.
Along with the selected model, you have the initial parameters to define. To evaluate graphically the impact of these parameters, you can go to the “Check Init.” tab.
Calculations settings
These settings correspond to the settings impacting the calculation of the CA parameters.
 Weighting: Type of weighting objective function. Possible methods are “uniform”, “Yobs”, “Ypred”, “Ypred2” or “Yobs2” (default value is “Yobs2”).
 Pool fit: If FALSE (default), the fit is performed with individual parameters; if TRUE, with the same parameters for all individuals.
 Method for BLQ: Method to replace the BLQ data. Possible options are: “zero”, “LOQ”, “LOQ2” or “missing” (default value is “missing”).
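These calculation settings can also be modified from R with setCASettings (see the R functions section). The argument names below are illustrative assumptions only and may differ from the actual API; check the function help for the exact names:

```r
# Illustrative sketch only: the argument names "weighting", "pool" and
# "blqMethod" are assumptions, not the documented API of setCASettings.
library(lixoftConnectors)
initializeLixoftConnectors(software = "pkanalix")
loadProject("/path/to/project.pkx")   # hypothetical project path
setCASettings(weighting = "Yobs2", pool = FALSE, blqMethod = "missing")
```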
4.2.CA check initial parameters
When clicking on “Check init.”, the predictions obtained with the initial values are displayed for each individual together with the data points. This feature is very useful to find “good” initial values. You can change the values of the parameters and see how the agreement with the data changes. In addition, you can change the axes to log-scale and choose the same limits on all axes to better compare the individuals.
On the bottom (in the green box), you have all the parameters: you can play with them and directly see the impact on the prediction (in red) for each individual. In addition, there is an “AUTOINIT” button (in the blue block on the right), which automatically provides good initial estimates of all the parameters, as in the following example. To set the new parameters as initial values for the calculation, click on “SET AS INITIAL VALUES”. This brings you back to the settings for the CA parameters calculation.
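The same automatic initialization is available from R via fillInitialParametersByAutoInit (listed in the R functions section). A minimal sketch with a placeholder project path:

```r
# Sketch: run the automatic initial-parameter calculation from R, then
# estimate the CA parameters. Requires a local MonolixSuite installation.
library(lixoftConnectors)
initializeLixoftConnectors(software = "pkanalix")
loadProject("/path/to/project.pkx")   # hypothetical project path
fillInitialParametersByAutoInit()     # equivalent of the "AUTOINIT" button
runCAEstimation()
```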
4.3.CA individual fits
Purpose
The figure displays the observed data for each subject, as well as the prediction computed with the individual parameters.
Individual parameters
Information on individual parameters can be used in two ways, as shown below. By clicking on Information (marked in green on the figure) in the General panel, individual parameter values can be displayed on each individual plot. Moreover, the plots can be sorted according to the values for a given parameter, in ascending or descending order (Sorting panel marked in blue). By default, the individual plots are sorted by subject id, with the same order as in the data set.
Special zoom
Userdefined constraints for the zoom are available. They allow to zoom in according to one axis only instead of both axes. Moreover, a link between plots can be set in order to perform a linked zoom on all individual plots at once. This is shown on the figure below with observations from the remifentanil example, and individual fits from a twocompartment model. It is thus possible to focus on the same time range or observation values for all individuals. In this example it is used to zoom on time on the elimination phase for all individuals, while keeping the Y axis in log scale unchanged for each plot.

Censored data
When an observation is censored, it differs from a “classical” observation and thus has a different representation. We represent it as a bar from the censored value specified in the data set to the associated limit.
Settings
 Grid arrange. The user can define the number of subjects that are displayed, as well as the number of rows and the number of columns. Moreover, a slider is present to be able to change the subjects under consideration.
 General
 Legend: hide/show the legend. The legend adapts automatically to the elements displayed on the plot. The same legend box applies to all subplots and it is possible to drag and drop the legend to the desired place.
 Grid: hide/show the grid in the background of the plots.
 Information: hide/show the individual parameter values for each subject (conditional mode or conditional mean depending on the “Individual estimates” choice in the “Display” settings section).
 Dosing times: hide/show dosing times as vertical lines for each subject.
 Link between plots: activate the linked zoom for all subplots. The same zooming region can be applied to all individuals on the x-axis only, on the y-axis only, on both, or on none (option “none”).
 Display
 Observed data: hide/show the observed data.
 Censored intervals [if censored data present]: hide/show the data marked as censored (BLQ), shown as a rectangle representing the censoring interval (for instance [0, LOQ]).
 Split occasions [if IOV present]: Split the individual subplots by occasions in case of IOV.
 Number of points used in the calculation of the prediction.
 Sorting: Sort the subjects by ID or individual parameter values in ascending or descending order.
By default, only the observed data and the individual fits are displayed.
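This plot can also be produced from R with plotCAIndividualFits (see the R functions section). The settings names below follow the plotObservedData example given later in this documentation and are assumptions for this function:

```r
# Sketch: individual fits plot from R. The "ylog" and "grid" settings are
# assumed by analogy with plotObservedData and may differ for this function.
library(lixoftConnectors)
initializeLixoftConnectors(software = "pkanalix")
loadProject("/path/to/project.pkx")   # hypothetical project path
runCAEstimation()
plotCAIndividualFits(settings = list(ylog = TRUE, grid = FALSE))
```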
4.4.CA parameters with respect to covariates
Purpose
The figure displays the individual parameters as a function of the covariates. It allows to identify correlation effects between the individual parameters and the covariates.
Identifying correlation effects
In the example below, we can see the parameters Cl and V1 with respect to the covariates: the weight WT, the age AGE and the sex category.
Visual guidelines
In order to help identify correlations, regression lines, spline interpolations and correlation coefficients can be overlaid on the plots for continuous covariates. Here we can see a strong correlation between the parameter Cl and both the age and the weight. The same holds for V1.
Highlight
Hovering on a point reveals the corresponding individual and highlights all the points from the same individual. This is useful to identify possible outliers and subsequently check their behavior in the observed data.
Selection
It is possible to select a subset of covariates or parameters, as shown below. In the selection panel, a set of contiguous rows can be selected with a single extended click, or a set of non-contiguous rows can be selected with several clicks while holding the Ctrl key. This is useful when there are many parameters or covariates.
Stratification
Stratification can be applied by creating groups of covariate values. As can be seen below, these groups can then be split, colored or filtered, allowing to check the effect of the covariate on the correlation between two parameters. The correlation coefficient is updated according to the split or filtering.
Settings
 General
 Legend and grid : add/remove the legend or the grid. There is only one legend for all plots.
 Information: display/hide the correlation coefficient associated with each scatter plot.
 Display
 Selection. The user can select some of the parameters or covariates to display only the corresponding plots. A simple click selects one parameter (or covariate), whereas multiple clicks while holding the Ctrl key selects a set of parameters.
 Visual cues. Add/remove a regression line or a spline interpolation.
4.5.Distribution of the CA parameters
Purpose
This figure can be used to see the empirical distribution of the CA parameters. Further analysis such as stratification by covariate can be performed and will be detailed below.
PDF and CDF
It is possible to display the theoretical distribution and the histogram of the empirical distribution as proposed below.
The distributions are represented as histograms for the probability density function (PDF). Hovering on the histogram also reveals the density value of each bin, as shown on the figure below.
The cumulative distribution function (CDF) is proposed too.
Example of stratification
It is possible to stratify the population by some covariate values and obtain the distributions of the individual parameters in each group. This can be useful to check a covariate effect, in particular when the distribution of a parameter exhibits two or more peaks for the whole population. In the following example, the distribution of the parameter k from the same example as above has been split into two groups of individuals according to the value of SEX, allowing to visualize two clearly different distributions.
Settings
 General: add/remove the legend, and the grid
 Display
 Distribution function: The user can choose to display either the probability density function (PDF) as histogram or the cumulative distribution function (CDF).
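The distribution plot can also be generated from R with plotCAParametersDistribution (see the R functions section). The stratify syntax below mirrors the plotObservedData example given later in this documentation and is an assumption for this function; SEX is the covariate of the example above:

```r
# Sketch: CA parameter distributions split by the covariate SEX.
# The stratify argument structure is assumed by analogy with plotObservedData.
library(lixoftConnectors)
initializeLixoftConnectors(software = "pkanalix")
loadProject("/path/to/project.pkx")   # hypothetical project path
runCAEstimation()
plotCAParametersDistribution(stratify = list(splitGroup = list(name = "SEX")))
```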
4.6.Correlation between CA parameters
Purpose
This plot displays scatter plots for each pair of parameters. It allows to identify correlations between parameters and to check the coherence of the parameters across individuals.
Example
In the following example, one can see the scatter plots for all pairs of estimated parameters.
Visual guidelines
In addition to regression lines, spline interpolations and correlation coefficients can be added to visualize the correlation between parameters.
Selection
It is possible to select a subset of parameters, whose pairs of correlations are then displayed, as shown below. In the selection panel, a set of contiguous rows can be selected with a single extended click, or a set of non-contiguous rows can be selected with several clicks while holding the Ctrl key.
Highlight
Similarly to other plots, hovering on a point provides information on the corresponding subject id, and highlights other points corresponding to the same individual.
Stratification: coloring and filtering
Stratification can be applied by creating groups of covariate values. As can be seen below, these groups can then be split, colored and/or filtered, allowing to check the effect of the covariate on the correlation between two parameters. The correlation coefficient is updated according to the stratifying action. In the following case, we split by the covariate SEX and color by the 2 categories of AGE.
Settings
 General
 Legend and grid: add/remove the legend or the grid. There is only one legend for all plots.
 Information: display/hide the correlation coefficient associated with each scatter plot.
 Display
 Selection. The user can select some of the parameters to display only the corresponding scatter plots. A simple click selects one parameter, whereas multiple clicks while holding the Ctrl key selects a set of parameters.
 Visual cues. Add/remove the regression line or the spline interpolation.
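This plot can also be generated from R with plotCAParametersCorrelation (see the R functions section). Since the “plot” functions return a ggplot2 object, standard ggplot2 commands can be chained to customize it. A minimal sketch with a placeholder project path:

```r
# Sketch: correlation plot between CA parameters, customized with ggplot2.
# Requires a local MonolixSuite installation; the project path is hypothetical.
library(lixoftConnectors)
library(ggplot2)
initializeLixoftConnectors(software = "pkanalix")
loadProject("/path/to/project.pkx")
runCAEstimation()
plotCAParametersCorrelation() + ggtitle("Correlation between CA parameters")
```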
5.R functions to run PKanalix
On the use of the R functions
PKanalix can be called via R functions. It is possible to have access to the project exactly in the same way as you would do with the interface. All the functions are described below.
Installation and initialization
All the installation guidelines and initialization procedure can be found here.
Description of the functions concerning the project management
 getData: Get a description of the data used in the current project.
 getInterpretedData: Get the data set interpreted by PKanalix.
 getStructuralModel: Get the model file for the structural model used in the current project.
 isProjectLoaded: Check if a project is loaded.
 loadProject: Load a project by parsing the Mlxtran-formatted file whose path has been given as an input.
 newProject: Create a new empty project providing model and data specification.
 saveProject: Save the current project as an Mlxtran-formatted file.
 setData: Set project data giving a data file and specifying headers and observations types.
 setStructuralModel: Set the structural model.
Description of the functions concerning the scenario
 fillInitialParametersByAutoInit: Run automatic calculation of optimized parameters for CA initial parameters.
 runBioequivalenceEstimation: Estimate the bioequivalence for the selected parameters.
 runCAEstimation: Estimate the CA parameters for each individual of the project.
 runEstimation: Run the NCA analysis and the CA analysis if the structural model for the CA calculation is defined.
 runNCAEstimation: Estimate the NCA parameters for each individual of the project.
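These tasks are typically chained after loading a project. A minimal sketch with a placeholder project path:

```r
# Sketch: run the full scenario from R. runEstimation() performs the NCA
# analysis, and the CA analysis if a structural model is defined.
library(lixoftConnectors)
initializeLixoftConnectors(software = "pkanalix")
loadProject("/path/to/project.pkx")   # hypothetical project path
runEstimation()
saveProject()   # write the results back to the project file
```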
Description of the functions concerning the results
 getBioequivalenceResults: Get results for different steps in bioequivalence analysis.
 getCAIndividualParameters: Get the estimated values for each subject of some of the individual CA parameters of the current project.
 getCAParameterStatistics: Get statistics over the estimated values of some of the CA parameters of the current project.
 getCAResultsStratification: Get the stratification used to compute the CA parameters statistics table.
 getNCAIndividualParameters: Get the estimated values for each subject of some of the individual NCA parameters of the current project.
 getNCAParameterStatistics: Get statistics over the estimated values of some of the NCA parameters of the current project.
 getNCAResultsStratification: Get the stratification used to compute the NCA parameters statistics table.
 getPointsIncludedForLambdaZ: Get the points associated to the lambdaz calculation.
 setCAResultsStratification: Set the stratification used to compute the CA parameters statistics table.
 setNCAResultsStratification: Set the stratification used to compute the NCA parameters statistics table.
 getResultsStratificationGroups: Get the stratification covariate groups used to compute statistics over individual parameters.
 setResultsStratificationGroups: Set the stratification covariate groups used to compute statistics over individual parameters.
Description of the functions concerning the data
 addAdditionalCovariate: Create an additional covariate for stratification purpose.
 applyFilter: Apply a filter on the current data.
 createFilter: Create a new filtered data set by applying a filter on an existing one and/or complementing it.
 deleteAdditionalCovariate: Delete a created additional covariate.
 deleteFilter: Delete a filtered data set.
 editFilter: Edit the definition of an existing filtered data set.
 getAvailableData: Get information about the data sets and filters defined in the project.
 getCovariateInformation: Get the name, the type and the values of the covariates present in the project.
 getObservationInformation: Get the name, the type and the values of the observations present in the project.
 getTreatmentsInformation: Get information about the treatments present in the project.
 removeFilter: Remove the last filter applied on the current data set.
 renameAdditionalCovariate: Rename an existing additional covariate.
 renameFilter: Rename an existing filtered data set.
 selectData: Select the new current data set within the previously defined ones (original and filters).
Description of the functions concerning the compartmental and non compartmental analysis settings
 getBioequivalenceSettings: Get the settings associated to the bioequivalence estimation.
 getCASettings: Get the settings associated to the compartmental analysis.
 getDataSettings: Get the data settings associated to the non compartmental analysis.
 getGlobalObsIdToUse: Get the global observation id used in both the compartmental and non compartmental analysis.
 getNCASettings: Get the settings associated to the non compartmental analysis.
 setBioequivalenceSettings: Set the value of one or several of the settings associated to the bioequivalence estimation.
 setCASettings: Set the settings associated to the compartmental analysis.
 setDataSettings: Set the value of one or several of the data settings associated to the non compartmental analysis.
 setGlobalObsIdToUse: Set the global observation id used in both the compartmental and non compartmental analysis.
 setNCASettings: Set the value of one or several of the settings associated to the non compartmental analysis.
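As an illustration of these getters and setters, the NCA settings can be read and modified as sketched below; the setNCASettings arguments are the ones used in Example 1 below, and the project path is a placeholder:

```r
# Sketch: read the current NCA settings, then change two of them.
# Requires a local MonolixSuite installation; the project path is hypothetical.
library(lixoftConnectors)
initializeLixoftConnectors(software = "pkanalix")
loadProject("/path/to/project.pkx")
settings <- getNCASettings()   # list of the current NCA settings
setNCASettings(lambdaRule = "adjustedR2",
               integralMethod = "LinLogTrapLinLogInterp")
```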
Description of the functions concerning the plot generation
 getChartsData: Compute Charts data with custom stratification options and custom computation settings.
 plotBivariateDataViewer: Generate Bivariate observations plots.
 plotCovariates: Generate covariate viewer plot.
 plotObservedData: Get the observed data plot.
 plotNCAIndividualFits: Get the individual fits with the NCA inputs.
 plotNCAParametersCorrelation: Get the correlation between the NCA individual parameters plot.
 plotNCAParametersDistribution: Get the NCA individual parameters distribution plot.
 plotNCAParametersVsCovariates: Get the individual NCA parameter vs covariate plot.
 plotCAIndividualFits: Get the individual fits with the CA inputs.
 plotCAParametersCorrelation: Get the correlation between the CA individual parameters plot.
 plotCAParametersDistribution: Get the CA individual parameters distribution plot.
 plotCAParametersVsCovariates: Get the individual CA parameter vs covariate plot.
 plotBEConfidenceIntervals: Plot the BE confidence interval.
 plotBESequenceByPeriod: Plot Bioequivalence Sequenced parameters by period.
 plotBESubjectByFormulation: Plot Bioequivalence parameters by formulation.
 getPlotPreferences: Get the plot preferences.
 resetPlotPreferences: Reset the plot preferences.
 setPlotPreferences: Set the plot preferences.
Description of the functions concerning preferences and project settings
 getPreferences: Get a summary of the project preferences.
 getProjectSettings: Get a summary of the project settings.
 setPreferences: Set the value of one or several of the project preferences.
 setProjectSettings: Set the value of one or several of the settings of the project.
Example
Example 1 : Setup of a PKanalix project, run and retrieving individual NCA parameters
Below is an example of the functions to call to run an NCA and CA analysis from scratch using one of the demo data sets.
# load library and initialize the API
library(lixoftConnectors)
initializeLixoftConnectors(software="pkanalix")
# create a new project by setting a data set
dataPath = paste0(getDemoPath(),'/2.case_studies/data/M2000_ivbolus_singledose.csv')
newProject(data = list(dataFile = dataPath,
                       headerTypes = c('id','time','amount','observation',
                                       'catcov','contcov','contcov','contcov'),
                       observationTypes = 'continuous'))
# set the options for the NCA analysis
setNCASettings(administrationtype = list("1"="intravenous"),
               integralMethod = "LinLogTrapLinLogInterp",
               lambdaRule = "adjustedR2")
# run the NCA analysis
runNCAEstimation()
# retrieve the output of interest
indivParams <- getNCAIndividualParameters("AUCINF_pred","Cmax")
The estimated NCA parameters can then be further analyzed using typical R functions, and plotted.
In the code below, the third-party R package flextable is used to generate a nice-looking table.
library(flextable)
ft <- flextable(indivParams$parameters)
ft
Example 2 : Plotting the individual data as spaghetti plot or with mean per treatment group
This demo data set corresponds to a 2x2x2 crossover design with a test and a ref formulation. Using the new (2021 version) “plot” functions, the data can be plotted as individual profiles, or as mean profiles calculated for the two different treatments and overlaid on a single plot. As the “plot” functions return a ggplot2 object, additional ggplot2 commands can be used to further customize the plot.
library(lixoftConnectors)
library(ggplot2)
initializeLixoftConnectors(software="pkanalix")
# loading the project from the demo folder
loadProject(paste0(getDemoPath(),"/3.bioequivalence/project_crossover_bioequivalence.pkx"))
# plotting the individual profiles colored by id and split by formulation
plotObservedData(settings=list(dots=T, lines=T, mean=F, error=F, ylog=T,
                               ylab="Concentration (ng/mL)", xlab="Time (hr)",
                               grid=F, cens=F),
                 stratify=list(colorGroup=list(name="ID"),
                               splitGroup=list(name="FORM"))) +
  scale_x_continuous(breaks=seq(0, 72, by=12))
# plotting the mean profile with standard deviation for the test and ref formulation
plotObservedData(settings=list(dots=F, lines=F, mean=T, error=T,
                               meanMethod="geometric", ylog=T,
                               ylab="Concentration (ng/mL)", xlab="Time (hr)",
                               grid=F),
                 stratify=list(colorGroup=list(name="FORM"),
                               colors=c("#00a4c6","#ff793f"))) +
  scale_x_continuous(breaks=seq(0, 72, by=12))
5.1.Rpackage installation and initialization
In this page, we present the installation procedure of the R package lixoftConnectors that allows to run PKanalix from R.
Installation
The R package lixoftConnectors is located in the installation directory as a tar.gz archive. It can be installed directly using RStudio (Tools > Install packages > from package archive file) or with the following R command:
install.packages(packagePath, repos = NULL, type = "source", INSTALL_opts = "--no-multiarch")
with packagePath = ‘<installDirectory>/connectors/lixoftConnectors.tar.gz’ where <installDirectory> is the MonolixSuite installation directory.
With the default installation directory, the command is:
# for Windows OS
install.packages("C:/ProgramData/Lixoft/MonolixSuite2019R1/connectors/lixoftConnectors.tar.gz", repos = NULL, type = "source", INSTALL_opts = "--no-multiarch")
# for Mac OS
install.packages("/Applications/MonolixSuite2019R1.app/Contents/Resources/monolixSuite/connectors/lixoftConnectors.tar.gz", repos = NULL, type = "source", INSTALL_opts = "--no-multiarch")
The lixoftConnectors package depends on the RJSONIO package that may need to be installed from CRAN first using:
install.packages('RJSONIO')
Initializing
When starting a new R session, you need to load the library and initialize the connectors with the following commands
library(lixoftConnectors) initializeLixoftConnectors(software = "pkanalix")
In some cases, it may be necessary to specify the path to the installation directory of the Lixoft suite. If no path is given, the one written in the <user home>/lixoft/lixoft.ini file is used (usually “C:/ProgramData/Lixoft/MonolixSuiteXXXX” for Windows) where XXXX corresponds to the version of MonolixSuite.
library(lixoftConnectors) initializeLixoftConnectors(software = "pkanalix", path = "/path/to/MonolixSuite/")
Making sure the installation is ok
To test if the installation is ok, you can load and run a project from the demos as follows:
demoPath = '<userFolder>/lixoft/pkanalix/pkanalix2019R1/demos/1.basic_examples/'
loadProject(paste0(demoPath, 'project_ivbolus.pkx'))
runNCAEstimation()
getNCAIndividualParameters()
where <userFolder> is the user’s home folder (on windows C:/Users/toto if toto is your username). These three commands should output the estimated NCA parameters.
5.2.Description of the R functions associated to PKanalix project's management
Description of the functions of the API
getData  Get a description of the data used in the current project. 
getInterpretedData  Get the data set interpreted by PKanalix. 
getStructuralModel  Get the model file for the structural model used in the current project. 
isProjectLoaded  Check if the project is loaded. 
loadProject  Load a project by parsing the Mlxtran-formatted file whose path has been given as an input. 
newProject  Create a new empty project providing model and data specification. 
saveProject  Save the current project as an Mlxtran-formatted file. 
setData  Set project data giving a data file and specifying headers and observations types. 
setStructuralModel  Set the structural model. 
[Monolix – PKanalix] Get project data
Description
Get a description of the data used in the current project. The available information is:
 dataFile (string): path to the data file
 header (array<character>): vector of header names
 headerTypes (array<character>): vector of header types
 observationNames (vector<string>): vector of observation names
 observationTypes (vector<string>): vector of observation types
 nbSSDoses (int): number of doses (if there is a SS column)
Usage
getData()
Value
A list describing project data.
See Also
Examples
data = getData()
data
> $dataFile
"/path/to/data/file.txt"
$header
c("ID","TIME","CONC","SEX","OCC")
$headerTypes
c("ID","TIME","OBSERVATION","CATEGORICAL COVARIATE","IGNORE")
$observationNames
c("concentration")
$observationTypes
c(concentration = "continuous")
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix] Get interpreted project data
Description
[Monolix – PKanalix] Get interpreted project data
Usage
getInterpretedData()
[Monolix – PKanalix – Simulx] Get structural model file
Description
Get the model file for the structural model used in the current project.
For Simulx, this function will return the structural model only if the project was imported from Monolix, and NULL otherwise.
Usage
getStructuralModel()
Value
A string corresponding to the path to the structural model file.
See Also
Examples
getStructuralModel() => "/path/to/model/inclusion/modelFile.txt"
[Monolix – PKanalix – Simulx] Get current project load status
Description
[Monolix – PKanalix – Simulx] Get current project load status
Usage
isProjectLoaded()
Value
TRUE if a project is currently loaded, FALSE otherwise
[Monolix – PKanalix – Simulx] Load project from file
Description
Load a project by parsing the Mlxtran-formatted file whose path has been given as an input.
The extensions are .mlxtran for Monolix, .pkx for PKanalix, and .smlx for Simulx.
WARNING: R distinguishes between ‘\’ and ‘/’; only ‘/’ can be used in paths.
Usage
loadProject(projectFile)
Arguments
projectFile 
(character) Path to the project file. Can be absolute or relative to the current working directory. 
See Also
Examples
loadProject("/path/to/project/file.mlxtran") # for Linux platform
loadProject("C:/Users/path/to/project/file.mlxtran") # for Windows platform
[Monolix – PKanalix – Simulx] Create new project
Description
Create a new empty project providing model and data specification. The data specification is:
 Monolix, PKanalix

 dataFile (string): path to the data file
 headerTypes (array<character>): vector of header types
 observationTypes [optional] (list): a list, indexed by observation name, giving the type of each observation present in the data file. If omitted, all the observations will be considered as “continuous”
 nbSSDoses (int): number of steady-state doses (if there is a SS column)
 mapping [optional](list): a list giving the observation name associated to each ytype present in the data file
(this field is mandatory when there is a column tagged with the “obsid” headerType)
Please refer to the setData documentation for a comprehensive description of the “data” argument structure.
 Monolix only
 projectFile (string): path to the datxplore or pkanalix project file defining the data
Usage
newProject(modelFile = NULL, data = NULL)
Arguments
modelFile 
(character) Path to the model file. Can be absolute or relative to the current working directory. To use a model from the libraries, you can set modelFile = “lib:modelName.txt”, e.g. modelFile = “lib:oral1_1cpt_kaVCl.txt” 
data 
(list) Structure describing the data. In case of PKanalix, data is mandatory and modelFile is optional (used only for the CA part and must be from the library). In case of Monolix, data and modelFile are mandatory. It can be replaced by a projectFile corresponding to a Datxplore or PKanalix project file. In case of Simulx, modelFile is mandatory and data = NULL. It can be replaced by a projectFile corresponding to a Monolix project. In that case, the Monolix project will be imported into Simulx. 
See Also
Examples
newProject(data = list(dataFile = "/path/to/data/file.txt",
headerTypes = c("IGNORE", "OBSERVATION"),
observationTypes = "continuous"),
modelFile = "/path/to/model/file.txt")
newProject(data = list(dataFile = "/path/to/data/file.txt",
headerTypes = c("IGNORE", "OBSERVATION", "OBSID"),
observationTypes = list(concentration = "continuous", effect = "discrete"),
mapping = list("1" = "concentration", "2" = "effect")),
modelFile = "/path/to/model/file.txt")
[Monolix only]
newProject(data = list(projectFile = "/path/to/project/file.datxplore"),
modelFile = "/path/to/model/file.txt")
[Simulx only]
# new project from an import of a structural model
newProject(modelFile = "/path/to/model/file.txt")
# new project from an import of a Monolix project
newProject(modelFile = "/path/to/monolix/project/file.mlxtran")
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix – Simulx] Save current project
Description
Save the current project as an Mlxtran-formatted file.
The extensions are .mlxtran for Monolix, .pkx for PKanalix, and .smlx for Simulx.
WARNING: R distinguishes between '\' and '/'; only '/' can be used in paths.
Usage
saveProject(projectFile = "")
Arguments
projectFile 
[optional](character) Path where to save a copy of the current mlxtran model. Can be absolute or relative to the current working directory. If no path is given, the file used to build the current configuration is updated. 
See Also
Click here to see examples
[PKanalix only]
saveProject("/path/to/project/file.pkx") # save a copy of the project
[Monolix only]
saveProject("/path/to/project/file.mlxtran") # save a copy of the project
[Simulx only]
saveProject("/path/to/project/file.smlx") # save a copy of the project
[Monolix – PKanalix – Simulx]
saveProject() # update current model
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix] Set project data
Description
Set project data giving a data file and specifying headers and observations types.
Usage
setData(dataFile, headerTypes, observationTypes, nbSSDoses = NULL)
Arguments
dataFile 
(character): Path to the data file. Can be absolute or relative to the current working directory. 
headerTypes 
(array<character>): A collection of header types. The possible header types are: "ignore", "id", "time", "observation", "amount", "contcov", "catcov", "occ", "evid", "mdv", "obsid", "cens", "limit", "regressor", "admid", "rate", "tinf", "ss", "ii", "addl", "date". Note that these are not the types displayed in the interface; they are shortcuts. 
observationTypes 
[optional] (list): A list giving the type of each observation present in the data file. If there is only one ytype, the corresponding observation name can be omitted. The possible observation types are “continuous”, “discrete”, and “event”. 
nbSSDoses 
[optional] (int): Number of steady-state doses (if there is a SS column). 
See Also
Click here to see examples
setData(dataFile = "/path/to/data/file.txt",
headerTypes = c("IGNORE", "OBSERVATION"), observationTypes = "continuous")
setData(dataFile = "/path/to/data/file.txt",
headerTypes = c("IGNORE", "OBSERVATION", "YTYPE"),
observationTypes = list(Concentration = "continuous", Level = "discrete"))
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix] Set structural model file
Description
Set the structural model.
NOTE: In case of PKanalix, the user can only use a structural model from the library for the CA analysis. Thus, the structural model should be written as 'lib:modelFromLibrary.txt'.
Usage
setStructuralModel(modelFile)
Arguments
modelFile 
(character) Path to the model file. Can be absolute or relative to the current working directory. 
See Also
Click here to see examples
setStructuralModel("/path/to/model/file.txt") # for Monolix
setStructuralModel("lib:oral1_2cpt_kaClV1QV2.txt") # for PKanalix or Monolix
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
5.3.Description of the R functions associated with the PKanalix scenario
Description of the functions of the API
fillInitialParametersByAutoInit  Run automatic calculation of optimized parameters for CA initial parameters. 
runBioequivalenceEstimation  Estimate the bioequivalence for the selected parameters. 
runCAEstimation  Estimate the CA parameters for each individual of the project. 
runEstimation  Run the NCA analysis and the CA analysis if the structural model for the CA calculation is defined. 
runNCAEstimation  Estimate the NCA parameters for each individual of the project. 
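The run functions above can be chained into a complete scenario. The sketch below is a hypothetical minimal workflow (the data file path, header layout and library model are placeholder assumptions), assuming the lixoftConnectors package is installed and initialized for PKanalix:

```r
# Hypothetical sketch: paths and header types are placeholders.
library(lixoftConnectors)
initializeLixoftConnectors(software = "pkanalix")

# Create a project; the library model enables the CA task in addition to NCA.
newProject(data = list(dataFile = "/path/to/data/file.txt",
                       headerTypes = c("id", "time", "amount", "observation"),
                       observationTypes = "continuous"),
           modelFile = "lib:oral1_1cpt_kaVCl.txt")

runEstimation()  # runs the NCA analysis, plus CA since a structural model is set
ncaParams <- getNCAIndividualParameters()  # NCA parameters per subject
saveProject("/path/to/project/file.pkx")
```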
[PKanalix] Automatically estimate initial parameter values.
Description
Run automatic calculation of optimized parameters for CA initial parameters.
Usage
fillInitialParametersByAutoInit(parameters)
Arguments
parameters 
(double) Initial values to be optimized, in the same format as the initialvalues field returned by getCASettings

Click here to see examples
getCASettings() -> parameters
fillInitialParametersByAutoInit(parameters$initialvalues) -> optimizedParameters
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Estimate the bioequivalence.
Description
Estimate the bioequivalence for the selected parameters.
Usage
runBioequivalenceEstimation()
Click here to see examples
runBioequivalenceEstimation()
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Estimate the individual parameters using compartmental analysis.
Description
Estimate the CA parameters for each individual of the project.
Usage
runCAEstimation()
Click here to see examples
runCAEstimation()
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Run both non compartmental and compartmental analysis.
Description
Run the NCA analysis and the CA analysis if the structural model for the CA calculation is defined.
Usage
runEstimation()
Click here to see examples
runEstimation()
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Estimate the individual parameters using non compartmental analysis.
Description
Estimate the NCA parameters for each individual of the project.
Usage
runNCAEstimation()
Click here to see examples
runNCAEstimation()
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
5.4.Description of the R functions associated with PKanalix results
Description of the functions of the API
getBioequivalenceResults  Get results for different steps in bioequivalence analysis. 
getCAIndividualParameters  Get the estimated values for each subject of some of the individual CA parameters of the current project. 
getCAParameterStatistics  Get statistics over the estimated values of some of the CA parameters of the current project. 
getCAResultsStratification  Get the stratification used to compute the CA parameter statistics table. 
getNCAIndividualParameters  Get the estimated values for each subject of some of the individual NCA parameters of the current project. 
getNCAParameterStatistics  Get statistics over the estimated values of some of the NCA parameters of the current project. 
getNCAResultsStratification  Get the stratification used to compute the NCA parameter statistics table. 
getPointsIncludedForLambdaZ  Get the points used for the lambda_Z calculation. 
setCAResultsStratification  Set the stratification used to compute the CA parameter statistics table. 
setNCAResultsStratification  Set the stratification used to compute the NCA parameter statistics table. 
getResultsStratificationGroups  Get the stratification covariate groups used to compute statistics over individual parameters. 
setResultsStratificationGroups  Set the stratification covariate groups used to compute statistics over individual parameters. 
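As a hypothetical illustration of how these results functions combine (the covariate and parameter names are placeholders), statistics can be recomputed per stratification group, assuming a PKanalix project is already loaded and the NCA task has run:

```r
# Hypothetical sketch: "SEX", "WEIGHT", "Cmax" and "Tmax" are placeholder names.
# Split the NCA statistics by sex and keep only the first weight group.
setNCAResultsStratification(split = "SEX",
                            filter = list(list("WEIGHT", c(1))))
stats <- getNCAParameterStatistics("Cmax", "Tmax")  # statistics per SEX group
```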
[PKanalix] Get Bioequivalence results
Description
Get results for different steps in bioequivalence analysis.
Usage
getBioequivalenceResults(...)
Arguments
... 
(string) Name of the step whose values must be displayed: "anova", "coefficientsOfVariation", "confidenceIntervals" 
Click here to see examples
bioeqResults = getBioequivalenceResults() # retrieve all the results values.
bioeqResults = getBioequivalenceResults("anova", "confidenceIntervals") # retrieve anova and confidence intervals results.
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Get CA individual parameters
Description
Get the estimated values for each subject of some of the individual CA parameters of the current project.
Usage
getCAIndividualParameters(...)
Arguments
... 
(string) Name of the individual parameters whose values must be displayed. 
Value
A data frame giving the estimated values of the individual parameters of interest for each subject
and a list of information relative to these parameters (units)
Click here to see examples
indivParams = getCAIndividualParameters() # retrieve all the available individual parameters values.
indivParams = getCAIndividualParameters("ka", "V") # retrieve ka and V values for all individuals.
$parameters
id ka V
1 0.8 1.2
. … …
N 0.4 2.2
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Get CA parameter statistics
Description
Get statistics over the estimated values of some of the CA parameters of the current project.
Statistics are computed on the different sets of individuals resulting from the stratification settings previously set.
Usage
getCAParameterStatistics(...)
Arguments
... 
(string) Name of the parameters whose values must be displayed. 
Value
A data frame giving the statistics over the parameters of interest, and a list of information relative to these parameters (units)
See Also
setCAResultsStratification
getCAResultsStratification
setResultsStratificationGroups
Click here to see examples
indivParams = getCAParameterStatistics()
# retrieve all the available parameters values.
indivParams = getCAParameterStatistics("ka", "V")
# retrieve ka and V values for all individuals.
$parameters
parameter min Q1 median Q3 max mean SD SE CV geoMean geoSD
ka 0.05742669 0.08886395 0.1186787 0.1495961 0.1983748 0.1221367 0.03898449 0.007957675 31.91873 0.1159751 1.398436
V 7.859237 13.51599 23.00674 30.73677 43.39608 22.95211 10.99187 2.243707 47.89047 20.23981 1.704716
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Get CA results stratification
Description
Get the stratification used to compute the CA parameter statistics table.
Stratification is defined by:
 stratification covariate groups which are shared by both NCA and CA results
 a stratification state which is specific to each task results
Usage
getCAResultsStratification()
Details
For each covariate, stratification groups can be defined as a list with:
name  string  covariate name 
definition  vector<double> (continuous) or list<vector<string>> (categorical)  group separations (continuous) or modality sets (categorical) 
A stratification state is represented as a list with:
split  vector<string>  ordered list of split covariates 
filter  list< pair<string, vector<int>> >  list of pairs, each containing a covariate name and the indexes of the associated kept groups 
Value
A list with stratification groups (‘groups’) and stratification state (‘state’).
See Also
Click here to see examples
getCAResultsStratification()
$groups
list(
list( name = "WEIGHT",
definition = c(70),
type = "continuous",
range = c(65,85) ),
list( name = "TRT",
definition = list(c("a","b"), "c"),
type = "categorical",
categories = c("a","b","c") )
)
$state
$split
"WEIGHT"
$filter
list(list("WEIGHT", c(1,3)), list("TRT", c(1)))
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Get NCA individual parameters
Description
Get the estimated values for each subject of some of the individual NCA parameters of the current project.
Usage
getNCAIndividualParameters(...)
Arguments
... 
(string) Name of the individual parameters whose values must be displayed. 
Value
A data frame giving the estimated values of the individual parameters of interest for each subject,
and a list of information relative to these parameters (units & CDISC names)
Click here to see examples
indivParams = getNCAIndividualParameters()
# retrieve the values of all the available parameters.
indivParams = getNCAIndividualParameters("Tmax","Clast")
# retrieve only the values of Tmax and Clast for all individuals.
$parameters
id Tmax Clast
1 0.8 1.2
. … …
N 0.4 2.2
$information
CDISC
Tmax TMAX
Clast CLST
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Get NCA parameter statistics
Description
Get statistics over the estimated values of some of the NCA parameters of the current project.
Statistics are computed on the different sets of individuals resulting from the stratification settings previously set.
Usage
getNCAParameterStatistics(...)
Arguments
... 
(string) Name of the parameters whose values must be displayed. 
Value
A data frame giving the statistics over the parameters of interest, and a list of information relative to these parameters (units & CDISC names)
See Also
setNCAResultsStratification
getNCAResultsStratification
setResultsStratificationGroups
Click here to see examples
statistics = getNCAParameterStatistics()
# retrieve the values of all the available parameters.
statistics = getNCAParameterStatistics("Tmax","Clast")
# retrieve only the values of Tmax and Clast for all individuals.
$statistics
parameter min Q1 median Q3 max mean SD SE CV Ntot Nobs Nmiss geoMean geoSD
Tmax 2.5 2.5 2.75 3 3 2.75 0.3535534 0.25 12.85649 2 2 0 2.738613 1.1376
Clast 0.76903 0.76903 0.85836 0.94769 0.94769 0.85836 0.1263317 0.08933 14.7178 2 2 0 0.853699 1.15918
$information
units CDISC
Tmax h Tmax
Clast mg.mL^-1 Clast
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Get NCA results stratification
Description
Get the stratification used to compute the NCA parameter statistics table.
Stratification is defined by:
 stratification covariate groups which are shared by both NCA and CA results
 a stratification state which is specific to each task results
Usage
getNCAResultsStratification()
Details
For each covariate, stratification groups can be defined as a list with:
name  string  covariate name 
definition  vector<double> (continuous) or list<vector<string>> (categorical)  group separations (continuous) or modality sets (categorical) 
A stratification state is represented as a list with:
split  vector<string>  ordered list of split covariates 
filter  list< pair<string, vector<int>> >  list of pairs, each containing a covariate name and the indexes of the associated kept groups 
Value
A list with stratification groups (‘groups’) and stratification state (‘state’).
See Also
Click here to see examples
getNCAResultsStratification()
$groups
list(
list( name = "WEIGHT",
definition = c(70),
type = "continuous",
range = c(65,85) ),
list( name = "TRT",
definition = list(c("a","b"), "c"),
type = "categorical",
categories = c("a","b","c") )
)
$state
$split
"WEIGHT"
$filter
list("Span", list("TRT", c(1)))
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Get points included in lambda_Z computation
Description
Get points used to compute lambda_Z in NCA estimation
Usage
getPointsIncludedForLambdaZ()
Click here to see examples
pointsIncluded = getPointsIncludedForLambdaZ()
ID time concentration BLQ includedForLambdaZ
1 0 0.0 0.00 0 0
2 0 0.5 3.05 0 1
3 0 2.0 5.92 1 1
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Set CA results stratification
Description
Set the stratification used to compute the CA parameter statistics table.
Stratification is defined by:
 stratification covariate groups which are shared by both NCA and CA results
 a stratification state which is specific to each task results
Usage
setCAResultsStratification(
split = NULL,
filter = NULL,
groups = NULL,
state = NULL
)
Arguments
split 
(vector<string>) Ordered list of split covariates 
filter 
(list< pair<string, vector<int>> >) List of pairs, each containing a covariate name and the indexes of the associated kept groups 
groups 
Stratification groups list 
state 
Stratification state 
Details
For each covariate, stratification groups can be defined as a list with:
name  string  covariate name 
definition  vector<double> (continuous) or list<vector<string>> (categorical)  group separations (continuous) or modality sets (categorical) 
A stratification state is represented as a list with:
split  vector<string>  ordered list of split covariates 
filter  list< pair<string, vector<int>> >  list of pairs, each containing a covariate name and the indexes of the associated kept groups 
See Also
Click here to see examples
setCAResultsStratification(split = "SEX")
setCAResultsStratification(split = c("SEX", "WEIGHT"))
setCAResultsStratification(filter = list("SEX", 1))
setCAResultsStratification(filter = list(list("SEX", 1), list("WEIGHT", c(1,3))))
setCAResultsStratification(split = "WEIGHT", filter = list(list("TRT", c(1,2))),
groups = list(list(name = "WEIGHT", definition = c(65.5, 72)), list(name = "TRT", definition = list(c("a","b"), "c", c("d","e")))))
s = getCAResultsStratification()
setCAResultsStratification(state = s$state, groups = s$groups)
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[PKanalix] Set NCA results stratification
Description
Set the stratification used to compute the NCA parameter statistics table.
Stratification is defined by:
 stratification covariate groups which are shared by both NCA and CA results
 a stratification state which is specific to each task results
Usage
setNCAResultsStratification(
split = NULL,
filter = NULL,
groups = NULL,
state = NULL
)
Arguments
split 
(vector<string>) Ordered list of split covariates 
filter 
(list< pair<string, vector<int>> >) List of pairs, each containing a covariate name and the indexes of the associated kept groups 
groups 
Stratification groups list 
state 
Stratification state 
Details
For each covariate, stratification groups can be defined as a list with:
name  string  covariate name 
definition  vector<double> (continuous) or list<vector<string>> (categorical)  group separations (continuous) or modality sets (categorical) 
A stratification state is represented as a list with:
split  vector<string>  ordered list of split covariates 
filter  list< pair<string, vector<int>> >  list of pairs, each containing a covariate name and the indexes of the associated kept groups 
Note: For acceptance criteria filtering, it is possible to give only the criterion name instead of a pair.
See Also
Click here to see examples
setNCAResultsStratification(split = "SEX")
setNCAResultsStratification(split = c("SEX", "WEIGHT"))
setNCAResultsStratification(filter = "Span")
setNCAResultsStratification(filter = list("Span", list("SEX", 1)))
setNCAResultsStratification(split = "WEIGHT", filter = list(list("TRT", c(1,2))),
groups = list(list(name = "WEIGHT", definition = c(65.5, 72)), list(name = "TRT", definition = list(c("a","b"), "c", c("d","e")))))
s = getNCAResultsStratification()
setNCAResultsStratification(state = s$state, groups = s$groups)
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix – Simulx] Get results stratification groups
Description
Get the stratification covariate groups used to compute statistics over individual parameters.
These groups are shared by all the task results.
Usage
getResultsStratificationGroups()
Details
For each covariate, stratification groups can be defined as a list with:
name  string  covariate name 
definition  vector<double> (continuous) or list<vector<string>> (categorical)  group separations (continuous) or modality sets (categorical) 
Value
Stratification groups list.
See Also
setResultsStratificationGroups
Click here to see examples
getResultsStratificationGroups()
list(
list( name = "WEIGHT",
definition = c(70),
type = "continuous",
range = c(65,85) ),
list( name = "TRT",
definition = list(c("a","b"), "c"),
type = "categorical",
categories = c("a","b","c") )
)
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix – Simulx] Set results stratification groups
Description
Set the stratification covariate groups used to compute statistics over individual parameters.
These groups are shared by all the task results.
Usage
setResultsStratificationGroups(groups)
Arguments
groups 
Stratification groups list 
Details
For each covariate, stratification groups can be defined as a list with:
name  string  covariate name 
definition  vector<double> (continuous) or list<vector<string>> (categorical)  group separations (continuous) or modality sets (categorical) 
See Also
getResultsStratificationGroups
Click here to see examples
setResultsStratificationGroups(list(list(name = "WEIGHT", definition = c(65.5, 72)), list(name = "TRT", definition = list(c("a","b"), "c", c("d","e")))))
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
5.5.Description of the R functions associated with the data set
Description of the functions of the API
addAdditionalCovariate  Create an additional covariate for stratification purpose. 
applyFilter  Apply a filter on the current data. 
createFilter  Create a new filtered data set by applying a filter on an existing one and/or complementing it. 
deleteAdditionalCovariate  Delete a created additional covariate. 
deleteFilter  Delete a data set. 
editFilter  Edit the definition of an existing filtered data set. 
getAvailableData  Get information about the data sets and filters defined in the project. 
getCovariateInformation  Get the name, the type and the values of the covariates present in the project. 
getObservationInformation  Get the name, the type and the values of the observations present in the project. 
getTreatmentsInformation  Get information about the treatments (doses) present in the project. 
removeFilter  Remove the last filter applied on the current data set. 
renameAdditionalCovariate  Rename an existing additional covariate. 
renameFilter  Rename an existing filtered data set. 
selectData  Select the new current data set within the previously defined ones (original and filters). 
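These data-set functions are typically combined into a small filtering workflow. The sketch below is hypothetical (the filter name "adults" and the WEIGHT condition are placeholders) and assumes a project is already loaded:

```r
# Hypothetical sketch: "adults" and the WEIGHT condition are placeholders.
createFilter(filter = list(removeIds = "WEIGHT<50"), name = "adults")
selectData(name = "adults")        # make the filtered data set the current one
applyFilter(filter = "complement") # derive and apply its complement
removeFilter()                     # undo the last filter applied on the current set
```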
[Monolix – PKanalix] Add an additional covariate
Description
Create an additional covariate for stratification purposes. Note that these covariates are available only if they are not
constant throughout the data set.
Available column transformations are:
[continuous]  'firstDoseAmount'  (first dose amount) 
[continuous]  'doseNumber'  (dose number) 
[discrete]  'administrationType'  (administration type) 
[discrete]  'administrationSequence'  (administration sequence) 
[discrete]  'dosingDesign'  (dose multiplicity) 
[continuous]  'observationNumber'  (observation number per individual, for a given observation type) 
Usage
addAdditionalCovariate(transformation, base = "", name = "")
Arguments
transformation 
(string) applied transformation. 
base 
(string) [optional] base data on which the transformation is applied. 
name 
(string) [optional] name of the covariate. 
See Also
Click here to see examples
addAdditionalCovariate("firstDoseAmount")
addAdditionalCovariate(transformation = "observationNumberPerIndividual", base = "CONC")
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix] Apply filter
Description
Apply a filter on the current data. Refer to createFilter
for more details about syntax, allowed parameters and examples.
Usage
applyFilter(filter, name = "")
Arguments
filter 
(list< list< action = "headerName-comparator-value" > > or "complement") filter definition. 
name 
(string) [optional] created data set name. 
See Also
getAvailableData
createFilter
removeFilter
Click here to see examples
applyFilter( filter = list(selectLines = "CONC>=5.5", removeLines = "CONC>10"))
applyFilter( filter = list(selectLines = "y1!=2") )
applyFilter( filter = list(selectIds = "SEX==M", selectIds = "WEIGHT<80") )
applyFilter( filter = "complement" )
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix] Create filter
Description
Create a new filtered data set by applying a filter on an existing one and/or complementing it.
Usage
createFilter(filter, name = "", origin = "")
Arguments
filter 
(list< list< action = "headerName-comparator-value" > > or "complement") [optional] filter definition. Existing actions are "selectLines", "selectIds", "removeLines" and "removeIds". The first vector level is for set unions, the second one for set intersections. It is possible to give only a list of actions if there is no high-level union. 
name 
(string) [optional] created data set name. If not defined, the default name is "currentDataSet_filtered". 
origin 
(string) [optional] name of the data set to be filtered. The current one is used by default. 
Details
The possible actions are line selection (selectLines), line removal (removeLines), Ids selection (selectIds) or removal (removeIds).
The selection is a string containing the header name, a comparison operator and a value:
selection = <string> "headerName comparator value" (e.g. "id=='100'", "WEIGHT<70", "SEX!='M'")
Notice that :
– The headerName corresponds to the data set header or one of the header aliases defined in MONOLIX software preferences
– The comparator possibilities are "==", "!=" for all types of value and "<=", "<", ">=", ">" only for numerical types
Syntax:
* create a simple filter:
createFilter( filter = list(act = sel)), e.g. createFilter( filter = list(removeIds = "WEIGHT<50"))
=> create a filter with the action act on the selection sel. In this example, we create a filter that removes all subjects with a weight less than 50.
* create a filter with several concurrent conditions, i.e AND condition:
createFilter( list(act1 = sel1, act2 = sel2)), e.g. createFilter( filter = list(removeIds = "WEIGHT<50", removeIds = "AGE<20"))
=> create a filter with both the action act1 on sel1 AND the action act2 on sel2. In this example, we create a filter that removes all subjects with a weight less than 50 and an age less than 20.
It corresponds to the intersection of the subjects with a weight less than 50 and the subjects with an age less than 20.
* create a filter with several nonconcurrent conditions, i.e. OR condition:
createFilter(filter = list(list(act1 = sel1), list(act2 = sel2)) ), e.g. createFilter( filter = list(list(removeIds = "WEIGHT<50"), list(removeIds = "AGE<20")))
=> create a filter with the action act1 on sel1 OR the action act2 on sel2. In this example, we create a filter that removes all subjects with a weight less than 50 or an age less than 20.
It corresponds to the union of the subjects with a weight less than 50 and the subjects with an age less than 20.
* It is possible to have any combination:
createFilter(filter = list(list(act1 = sel1), list(act2 = sel2, act3 = sel3)) ) <=> act1,sel1 OR ( act2,sel2 AND act3,sel3 )
* It is possible to create the complement of an existing filter:
createFilter(filter = "complement")
See Also
Click here to see examples
—————————————————————————————
LINE [ int ]
createFilter( filter = list(removeLines = "line>10") ) # keep only the first 10 rows
—————————————————————————————
ID [ string  int ]
If there are only integer identifiers within the data set, ids will be considered as integers. On the contrary, they will be treated as strings.
createFilter( filter = list(selectIds = "id==100") ) # select the subject called '100'
createFilter( filter = list(list(removeIds = "id!='id_2'")) ) # select all the subjects except the one called 'id_2'
—————————————————————————————
ID INDEX [int]
createFilter( filter = list(list(removeIds = "idIndex!=2"), list(selectIds = "id<5")) ) # select the first 4 subjects except the second one
—————————————————————————————
OCC [ int ]
createFilter( filter = list(selectIds = "occ1==1", removeIds = "occ2!=3") ) # select the subjects whose first occasion level is '1' and whose second one is different from '3'
—————————————————————————————
TIME [ double ]
createFilter( filter = list(removeIds='TIME>120') ) # remove the subjects who have a time over 120
createFilter( filter = list(selectLines='TIME>120') ) # select all the lines where the time is over 120
—————————————————————————————
OBSERVATION [ double ]
createFilter( filter = list(selectLines = "CONC>=5.5", removeLines = "CONC>10")) # select the lines where the CONC value is greater than or equal to 5.5 and not strictly greater than 10
createFilter( filter = list(removeIds = "CONC<0") ) # remove subjects who have negative CONC values
createFilter( filter = list(removeIds = "E==0") ) # remove subjects for whom E equals 0
—————————————————————————————
OBSID [ string ]
createFilter( filter = list(removeIds = "y1==1") ) # remove subjects who have at least one observation for y1
createFilter( filter = list(selectLines = "y1!=2") ) # select all the lines corresponding to observations except those for y2
—————————————————————————————
AMOUNT [ double ]
createFilter( filter = list(selectIds = "AMOUNT==10") ) # select subjects who have a dose equal to 10
—————————————————————————————
INFUSION RATE AND INFUSION DURATION [ double ]
createFilter( filter = list(selectIds = "RATE<10") ) # select subjects who have a dose with a rate less than 10
—————————————————————————————
COVARIATE [ string (categorical)  double (continuous) ]
createFilter( filter = list(selectIds = "SEX==M", selectIds = "WEIGHT<80") ) # select subjects who are men and whose weight is lower than 80 kg
—————————————————————————————
REGRESSOR [ double ]
createFilter( filter = list(selectLines = "REG>10") ) # select the lines where the regressor value is over 10
—————————————————————————————
COMPLEMENT
createFilter(origin = "data_filtered", filter = "complement" )
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix] Delete additional covariate
Description
Delete a created additional covariate.
Usage
deleteAdditionalCovariate(name)
Arguments
name 
(string) name of the covariate. 
See Also
Click here to see examples
deleteAdditionalCovariate("firstDoseAmount")
deleteAdditionalCovariate("observationNumberPerIndividual_y1")
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix] Delete filter
Description
Delete a data set. Only filtered data sets that are not active, and whose children are not active either, can be deleted.
Usage
deleteFilter(name)
Arguments
name 
(string) data set name. 
See Also
Click here to see examples
deleteFilter(name = "filter2")
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix] Edit filter
Description
Edit the definition of an existing filtered data set. Refer to createFilter
for more details about syntax, allowed parameters and examples.
Note that all the filtered data sets that depend on the edited one will be deleted.
Usage
editFilter(filter, name = "")
Arguments
filter 
(list< list< action = "headerName-comparator-value" > >) filter definition. 
name 
(string) [optional] data set name to edit (current one by default) 
See Also
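No example is given above for editFilter; the following is a hypothetical sketch (the filter name "adults" and the WEIGHT conditions are placeholders):

```r
# Hypothetical sketch: "adults" and the WEIGHT conditions are placeholders.
createFilter(filter = list(removeIds = "WEIGHT<50"), name = "adults")
editFilter(filter = list(removeIds = "WEIGHT<45"), name = "adults")
# Note: filtered data sets that depend on "adults" are deleted by the edit.
```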
[Monolix – PKanalix] Get data sets descriptions
Description
Get information about the data sets and filters defined in the project.
Usage
getAvailableData()
Click here to see examples
getAvailableData()
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix] Get covariates information
Description
Get the name, the type and the values of the covariates present in the project.
Usage
getCovariateInformation()
Value
A list containing the following fields:
 name (vector<string>): covariate names
 type (vector<string>): covariate types. Existing types are "continuous", "continuoustransformed", "categorical", "categoricaltransformed".
In Monolix mode, "latent" covariates are also allowed.
 [Monolix] modalityNumber (vector<int>): number of modalities (for latent covariates only)
 covariate: a data frame giving the values of continuous and categorical covariates for each subject.
Latent covariate values exist only if they have been estimated, i.e. if the covariate is used and if the population parameters have been estimated.
Call getEstimatedIndividualParameters to retrieve them.
Click here to see examples
info = getCovariateInformation() # Monolix mode with latent covariates
info
> $name
c("sex","wt","lcat")
> $type
c(sex = "categorical", wt = "continuous", lcat = "latent")
> $modalityNumber
c(lcat = 2)
> $covariate
id sex wt
1 M 66.7
. . .
N F 59.0
## End(Not run)
Top of the page, PKanalix API, Monolix API, Simulx API.
[Monolix – PKanalix] Get observations information
Description
Get the name, the type and the values of the observations present in the project.
Usage
getObservationInformation()
Value
A list containing the following fields:
 name (vector<string>): observation names.
 type (vector<string>): observation generic types. Existing types are "continuous", "discrete", "event".
 [Monolix] detailedType (vector<string>): observation specialized types set in the structural model. Existing types are "continuous", "bsmm", "wsmm", "categorical", "count", "exactEvent", "intervalCensoredEvent".
 [Monolix] mapping (vector<string>): mapping between the observation names (defined in the mlxtran project) and the name of the corresponding entry in the data set.
 ["obsName"] (data.frame): observation values for each observation id.
In PKanalix mode, the observation type and the mapping are not provided, as there is only one output in use and it is necessarily continuous.
Examples
info = getObservationInformation()
info
> $name
c("concentration")
> $type # [Monolix]
c(concentration = "continuous")
> $detailedType # [Monolix]
c(concentration = "continuous")
> $mapping # [Monolix]
c(concentration = "CONC")
> $concentration
id time concentration
1 0.5 0.0
. . .
N 9.0 10.8
[Monolix – PKanalix] Get treatment information
Description
Get information about the treatments (dosing records) present in the project.
Usage
getTreatmentsInformation()
Value
A dataframe whose columns are:
 id and occasion level names (string)
 time (double)
 amount (double)
 [optional] administrationType (int)
 [optional] infusionTime (double)
 [optional] isArtificial (bool): whether the dose was created from an SS or ADDL column
 [optional] isReset (bool): IOV case only
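A usage sketch consistent with the Value section above (the optional columns appear only when the data set defines them; this assumes a project has already been loaded through the lixoftConnectors package):

```r
trt <- getTreatmentsInformation()      # data frame with one row per dose
head(trt[, c("id", "time", "amount")]) # columns always listed in the Value section
```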
[Monolix – PKanalix] Remove filter
Description
Remove the last filter applied on the current data set.
Usage
removeFilter()
Examples
removeFilter()
[Monolix – PKanalix] Rename additional covariate
Description
Rename an existing additional covariate.
Usage
renameAdditionalCovariate(oldName, newName)
Arguments
oldName 
(string) current name of the covariate to rename 
newName 
(string) new name. 
Examples
renameAdditionalCovariate(oldName = "observationNumberPerIndividual_y1", newName = "nbObsForY1")
[Monolix – PKanalix] Rename filter
Description
Rename an existing filtered data set.
Usage
renameFilter(newName, oldName = "")
Arguments
newName 
(string) new name. 
oldName 
(string) [optional] current name of the filtered data set to rename (current one by default) 
Examples
renameFilter("newFilter")
renameFilter(oldName = "filter", newName = "newFilter")
[Monolix – PKanalix] Select data set
Description
Select the new current data set within the previously defined ones (original and filters).
Usage
selectData(name)
Arguments
name 
(string) data set name. 
Examples
selectData(name = "filter1")
5.6. Description of the R functions associated with PKanalix settings
Description of the functions of the API
getBioequivalenceSettings  Get the settings associated to the bioequivalence estimation. 
getCASettings  Get the settings associated to the compartmental analysis. 
getDataSettings  Get the data settings associated to the non compartmental analysis. 
getGlobalObsIdToUse  Get the global observation id used in both the compartmental and non compartmental analysis. 
getNCASettings  Get the settings associated to the non compartmental analysis. 
setBioequivalenceSettings  Set the value of one or several of the settings associated to the bioequivalence estimation. 
setCASettings  Set the settings associated to the compartmental analysis. 
setDataSettings  Set the value of one or several of the data settings associated to the non compartmental analysis. 
setGlobalObsIdToUse  Set the global observation id used in both the compartmental and non compartmental analysis. 
setNCASettings  Set the value of one or several of the settings associated to the non compartmental analysis. 
[PKanalix] Get the settings associated to the bioequivalence estimation.
Description
Get the settings associated to the bioequivalence estimation. Associated settings are:
“level”  (int)  Level of the confidence interval 
“bioequivalenceLimits”  (vector)  Limit in which the confidence interval must be to conclude the bioequivalence is true 
"computedBioequivalenceParameters"  (data.frame)  Parameters to consider for the bioequivalence analysis and whether they should be log-transformed (true/false). This list must be a subset of the NCA setting "computedNCAParameters". 
"linearModelFactors"  (list)  The list can specify "id", "period", "formulation", "sequence" and "additional". The values are headers of the data set, except for "reference" where it must be one of the categories of the "formulation" categorical covariate. For "additional", a vector of headers can be given. 
"degreesFreedom"  (string)  t-test using the residuals degrees of freedom assuming equal variances ("residuals"), or using the Welch-Satterthwaite degrees of freedom assuming unequal variances ("WelchSatterthwaite", default) 
"bedesign"  (string)  automatically recognized BE design, "crossover" or "parallel" (cannot be changed) 
Usage
getBioequivalenceSettings(...)
Arguments
... 
[optional] (string) Name of the settings whose value should be displayed. If no argument is provided, all the settings are returned. 
Value
An array which associates each setting name to its current value.
Examples
getBioequivalenceSettings() # retrieve a list of all the bioequivalence methodology settings
getBioequivalenceSettings("level", "bioequivalenceLimits") # retrieve only the settings whose names are passed as arguments
[PKanalix] Get the settings associated to the compartmental analysis
Description
Get the settings associated to the compartmental analysis. Associated settings are:
“weightingCA”  (string)  Type of weighting objective function. 
“pool”  (logical)  Fit with individual parameters or with the same parameters for all individuals. 
“initialValues” (list)  list(param = value, …): value = initial value of individual parameter param.  
“blqMethod”  (string)  Method by which the BLQ data should be replaced. 
Usage
getCASettings(...)
Arguments
... 
[optional] (string) Name of the settings whose value should be displayed. If no argument is provided, all the settings are returned. 
Value
An array which associates each setting name to its current value.
Examples
getCASettings() # retrieve a list of all the CA methodology settings
getCASettings("weightingCA", "blqMethod") # retrieve only the settings whose names are passed as arguments
[PKanalix] Get the data settings associated to the non compartmental analysis
Description
Get the data settings associated to the non compartmental analysis. Associated settings are:
“urinevolume”  (string)  regressor name used as urine volume. 
“datatype”  (list)  list(“obsId” = string(“plasma” or “urine”)). The type of data associated with each obsId (observation ID from the data set). 
“units”  (list)  list with the units associated to “dose”, “time” and “volume”. 
“scalings”  (list)  list with the scaling factor associated to “concentration”, “dose”, “time” and “urinevolume”. 
“enableunits”  (bool)  are units enabled or not. 
Usage
getDataSettings(...)
Arguments
... 
[optional] (string) Name of the settings whose value should be displayed. If no argument is provided, all the settings are returned. 
Value
An array which associates each setting name to its current value.
Examples
getDataSettings() # retrieve a list of all the NCA data settings
getDataSettings("urinevolume") # retrieve only the settings whose names are passed as arguments
[PKanalix] Get the global observation id used in both the compartmental and non compartmental analysis
Description
Get the global observation id used in both the compartmental and non compartmental analysis.
Usage
getGlobalObsIdToUse()
Value
the observation id used in computations.
Examples
getGlobalObsIdToUse()
[PKanalix] Get the settings associated to the non compartmental analysis
Description
Get the settings associated to the non compartmental analysis. Associated settings are:
“administrationType”  (list)  list(key = “admId”, value = string(“intravenous” or “extravascular”)). admId: administration ID from the data set, or 1 if there is no admId column in the data set. 
“integralMethod”  (string)  Method for AUC and AUMC calculation and interpolation. 
“partialAucTime”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is a list of the values of the bounds of the partial AUC calculation intervals. 
“blqMethodBeforeTmax”  (string)  Method by which the BLQ data before Tmax should be replaced. 
“blqMethodAfterTmax”  (string)  Method by which the BLQ data after Tmax should be replaced. 
“ajdr2AcceptanceCriteria”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is the value of the adjusted R2 acceptance criteria for the estimation of lambda_Z. 
“extrapAucAcceptanceCriteria”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is the value of the AUC extrapolation acceptance criteria for the estimation of lambda_Z. 
“spanAcceptanceCriteria”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is the value of the span acceptance criteria for the estimation of lambda_Z. 
“lambdaRule”  (string)  Main rule for the lambda_Z estimation. 
“timeInterval”  (vector)  Time interval for the lambda_Z estimation when “lambdaRule” = “interval”. 
“timeValuesPerId”  (list)  list(“idName” = idTimes, …): idTimes = observation times to use for the calculation of lambda_Z for the id idName. 
“nbPoints”  (integer)  Number of points for the lambda_Z estimation when “lambdaRule” = “points”. 
“maxNbOfPoints”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is the maximum number of points to use for the lambda_Z estimation when “lambdaRule” = “R2” or “adjustedR2”. 
“startTimeNotBefore”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is the minimum time value to use for the lambda_Z estimation when “lambdaRule” = “R2” or “adjustedR2”. 
“weightingNCA”  (string)  Weighting method used for the regression that estimates lambda_Z. 
“computedNCAParameters”  (vector)  All the parameters to compute during the analysis. 
Usage
getNCASettings(...)
Arguments
... 
[optional] (string) Name of the settings whose value should be displayed. If no argument is provided, all the settings are returned. 
Value
An array which associates each setting name to its current value.
Examples
getNCASettings() # retrieve a list of all the NCA methodology settings
getNCASettings("lambdaRule", "integralMethod") # retrieve only the settings whose names are passed as arguments
[PKanalix] Set the value of one or several of the settings associated to the bioequivalence estimation
Description
Set the value of one or several of the settings associated to the bioequivalence estimation. Associated settings are:
“level”  (int)  Level of the confidence interval 
“bioequivalenceLimits”  (vector)  Limit in which the confidence interval must be to conclude the bioequivalence is true 
"computedBioequivalenceParameters"  (data.frame)  Parameters to consider for the bioequivalence analysis and whether they should be log-transformed (true/false). This list must be a subset of the NCA setting "computedNCAParameters". 
"linearModelFactors"  (list)  The list can specify "id", "period", "formulation", "sequence" and "additional". The values are headers of the data set, except for "reference" where it must be one of the categories of the "formulation" categorical covariate. For "additional", a vector of headers can be given. 
"degreesFreedom"  (string)  t-test using the residuals degrees of freedom assuming equal variances ("residuals"), or using the Welch-Satterthwaite degrees of freedom assuming unequal variances ("WelchSatterthwaite", default) 
Usage
setBioequivalenceSettings(...)
Arguments
... 
A collection of commaseparated pairs {settingName = settingValue}. 
Examples
setBioequivalenceSettings(level = 90, bioequivalenceLimits = c(85, 115)) # set the settings whose names are passed as arguments
setBioequivalenceSettings(computedBioequivalenceParameters = data.frame(parameters = c("Cmax", "Tmax"), logtransform = c(TRUE, FALSE)))
setBioequivalenceSettings(linearModelFactors = list(id = "SUBJ", period = "OCC", formulation = "FORM", reference = "ref", sequence = "SEQ", additional = c("Group", "Phase")))
[PKanalix] Set the settings associated to the compartmental analysis
Description
Set the settings associated to the compartmental analysis. Associated settings names are:
“weightingCA”  (string)  Type of weighting objective function. Possible methods are “uniform”, “Yobs”, “Ypred”, “Ypred2” or “Yobs2” (default). 
“pool”  (logical)  If TRUE, fit with the same parameters for all individuals; if FALSE (default), fit with individual parameters. 
“initialValues” (list)  list(param = value, …): value = initial value of individual parameter param.  
“blqMethod”  (string)  Method by which the BLQ data should be replaced. Possible methods are “zero”, “LOQ”, “LOQ2” or “missing” (default). 
Usage
setCASettings(...)
Arguments
... 
A collection of commaseparated pairs {settingName = settingValue}. 
Examples
setCASettings(weightingCA = "uniform", blqMethod = "zero") # set the settings whose names are passed as arguments
setCASettings(initialValues = list(Cl = 0.4, V = 0.5, ka = 0.04)) # set the initial values of Cl, V and ka to 0.4, 0.5 and 0.04 respectively
[PKanalix] Set the value of one or several of the data settings associated to the non compartmental analysis
Description
Set the value of one or several of the data settings associated to the non compartmental analysis. Associated settings names are:
“urinevolume”  (string)  regressor name used as urine volume. 
“datatype”  (list)  list(“obsId” = string(“plasma” or “urine”)). The type of data associated with each obsId. Default "plasma". 
“units”  (list)  list with the units associated to “dose”, “time” and “volume”. 
“scalings”  (list)  list with the scaling factor associated to “concentration”, “dose”, “time” and “urinevolume”. 
“enableunits”  (bool)  are units enabled or not. 
Usage
setDataSettings(...)
Arguments
... 
A collection of commaseparated pairs {settingName = settingValue}. 
Examples
setDataSettings("datatype" = list("Y" = "plasma")) # set the settings whose names are passed as arguments
[PKanalix] Set the global observation id used in both the compartmental and non compartmental analysis
Description
Set the global observation id used in both the compartmental and non compartmental analysis.
Usage
setGlobalObsIdToUse(...)
Arguments
... 
("id" string) the observation id from data section to use for computations. 
Examples
setGlobalObsIdToUse("id")
[PKanalix] Set the value of one or several of the settings associated to the non compartmental analysis
Description
Set the value of one or several of the settings associated to the non compartmental analysis. Associated settings are:
“administrationType”  (list)  list(key = “admId”, value = string(“intravenous” or “extravascular”)). admId: administration ID from the data set, or 1 if there is no admId column in the data set. 
“integralMethod”  (string)  Method for AUC and AUMC calculation and interpolation. 
“partialAucTime”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is a list of the values of the bounds of the partial AUC calculation intervals. By default, the boolean equals FALSE and the bounds are c(-Inf, +Inf). 
“blqMethodBeforeTmax”  (string)  Method by which the BLQ data before Tmax should be replaced. Possible methods are “missing”, “LOQ”, “LOQ2” or “zero” (default). 
“blqMethodAfterTmax”  (string)  Method by which the BLQ data after Tmax should be replaced. Possible methods are “zero”, “missing”, “LOQ” or “LOQ2” (default). 
“ajdr2AcceptanceCriteria”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is the value of the adjusted R2 acceptance criteria for the estimation of lambda_Z. By default, the boolean equals FALSE and the value is 0.98. 
“extrapAucAcceptanceCriteria”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is the value of the AUC extrapolation acceptance criteria for the estimation of lambda_Z. By default, the boolean equals FALSE and the value is 20. 
“spanAcceptanceCriteria”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is the value of the span acceptance criteria for the estimation of lambda_Z. By default, the boolean equals FALSE and the value is 3. 
“lambdaRule”  (string)  Main rule for the lambda_Z estimation. Possible rules are “R2”, “interval”, “points” or “adjustedR2” (default). 
“timeInterval”  (vector)  Time interval for the lambda_Z estimation when “lambdaRule” = “interval”. This is a vector of size two, default = c(-inf, +inf). 
“timeValuesPerId”  (list)  list(“idName” = idTimes, …): idTimes = observation times to use for the calculation of lambda_Z for the id idName. Default = NULL, all the time values are used. 
“nbPoints”  (integer)  Number of points for the lambda_Z estimation when “lambdaRule” = “points”. Default = 3. 
“maxNbOfPoints”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is the maximum number of points to use for the lambda_Z estimation when “lambdaRule” = “R2” or “adjustedR2”. By default, the boolean equals FALSE and the value is inf. 
“startTimeNotBefore”  (list)  The first element of the list is a boolean describing if this setting is used. The second element of the list is the minimum time value to use for the lambda_Z estimation when “lambdaRule” = “R2” or “adjustedR2”. By default, the boolean equals FALSE and the value is 0. 
“weightingNca”  (string)  Weighting method used for the regression that estimates lambda_Z. Possible methods are “Y”, “Y2” or “uniform” (default). 
“computedNCAParameters”  (vector)  All the parameters to compute during the analysis. 
Usage
setNCASettings(...)
Arguments
... 
A collection of commaseparated pairs {settingName = settingValue}. 
Examples
setNCASettings(integralMethod = "LinLogTrapLinLogInterp", weightingNca = "uniform") # set the settings whose names are passed as arguments
setNCASettings(administrationType = list("1" = "extravascular")) # set the administration id "1" to extravascular
setNCASettings(startTimeNotBefore = list(TRUE, 15)) # estimate lambda_z using only points with time over 15
setNCASettings(timeValuesPerId = list('1' = c(4, 6, 8, 30), '4' = c(8, 12, 18, 24, 30))) # use time = {4, 6, 8, 30} for id '1' and time = {8, 12, 18, 24, 30} for id '4' in the lambda_z calculation
setNCASettings(timeValuesPerId = NULL) # revert the points used for lambda_z to the default rule
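The settings getters and setters above operate on the currently loaded project, so a typical script loads a project first. A minimal sketch (assumes a local PKanalix installation with the lixoftConnectors R package; the project path is a placeholder):

```r
library(lixoftConnectors)
initializeLixoftConnectors(software = "pkanalix")   # connect to the installed PKanalix

loadProject("/path/to/project.pkx")                 # placeholder path

getNCASettings("lambdaRule")                        # inspect the current lambda_z rule
setNCASettings(lambdaRule = "adjustedR2",           # default rule, made explicit here
               startTimeNotBefore = list(TRUE, 15), # only use points with time over 15
               weightingNca = "uniform")
```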
5.7. Description of the R functions associated with PKanalix preferences and project settings
Description of the functions of the API
getPreferences  Get a summary of the project preferences. 
getProjectSettings  Get a summary of the project settings. 
setPreferences  Set the value of one or several of the project preferences. 
setProjectSettings  Set the value of one or several of the settings of the project. 
[Monolix – PKanalix – Simulx] Get project preferences
Description
Get a summary of the project preferences. Preferences are:
“relativepath”  (bool)  Use relative path for save/load operations. 
“threads”  (int >0)  Number of threads. 
“timestamping”  (bool)  Create an archive containing result files after each run. 
“delimiter”  (string)  Character used as delimiter in exported result files. 
“exportchartsData”  (bool)  Should graphics data be exported. 
“exportsimulationfiles”  (bool)  [Simulx] Should simulation results files be exported. 
“headeraliases”  (list("header" = vector<string>))  For each header, the list of the recognized aliases. 
“ncaparameters”  (vector<string>)  [PKanalix] NCA parameters computed by default. 
Usage
getPreferences(...)
Arguments
... 
[optional] (string) Name of the preference whose value should be displayed. If no argument is provided, all the preferences are returned. 
Value
An array which associates each preference name to its current value.
Examples
getPreferences() # retrieve a list of all the general settings
getPreferences("imageFormat", "exportCharts") # retrieve only the imageFormat and exportCharts settings values
[Monolix – PKanalix – Simulx] Get project settings
Description
Get a summary of the project settings.
Associated settings for Monolix projects are:
“directory”  (string)  Path to the folder where simulation results will be saved. It should be a writable directory. 
“exportResults”  (bool)  Should results be exported. 
“seed”  (0< int <2147483647)  Seed used by random generators. 
“grid”  (int)  Number of points for the continuous simulation grid. 
“nbSimulations”  (int)  Number of simulations. 
“dataAndModelNextToProject”  (bool)  Should data and model files be saved next to project. 
Associated settings for PKanalix projects are:
“directory”  (string)  Path to the folder where simulation results will be saved. It should be a writable directory. 
“dataNextToProject”  (bool)  Should data files be saved next to project. 
Associated settings for Simulx projects are:
“directory”  (string)  Path to the folder where simulation results will be saved. It should be a writable directory. 
“seed”  (0< int <2147483647)  Seed used by random generators. 
“userfilesnexttoproject”  (bool)  Should user files be saved next to project. 
Usage
getProjectSettings(...)
Arguments
... 
[optional] (string) Name of the settings whose value should be displayed. If no argument is provided, all the settings are returned. 
Value
An array which associates each setting name to its current value.
Examples
getProjectSettings() # retrieve a list of all the project settings
getProjectSettings("directory", "seed") # retrieve only the directory and the seed settings values
[Monolix – PKanalix – Simulx] Set preferences
Description
Set the value of one or several of the project preferences. Preferences are:
“relativepath”  (bool)  Use relative path for save/load operations. 
“threads”  (int >0)  Number of threads. 
“timestamping”  (bool)  Create an archive containing result files after each run. 
“delimiter”  (string)  Character used as delimiter in exported result files. 
“exportchartsData”  (bool)  Should graphics data be exported. 
“exportsimulationfiles”  (bool)  [Simulx] Should simulation results files be exported. 
“headeraliases”  (list("header" = vector<string>))  For each header, the list of the recognized aliases. 
“ncaparameters”  (vector<string>)  [PKanalix] NCA parameters computed by default. 
Usage
setPreferences(...)
Arguments
... 
A collection of commaseparated pairs {preferenceName = settingValue}. 
Examples
setPreferences(exportCharts = FALSE, delimiter = ",")
[Monolix – PKanalix – Simulx] Set project settings
Description
Set the value of one or several of the settings of the project.
Associated settings for Monolix projects are:
“directory”  (string)  Path to the folder where simulation results will be saved. It should be a writable directory. 
“exportResults”  (bool)  Should results be exported. 
“seed”  (0< int <2147483647)  Seed used by random generators. 
“grid”  (int)  Number of points for the continuous simulation grid. 
“nbSimulations”  (int)  Number of simulations. 
“dataAndModelNextToProject”  (bool)  Should data and model files be saved next to project. 
Associated settings for PKanalix projects are:
“directory”  (string)  Path to the folder where simulation results will be saved. It should be a writable directory. 
“dataNextToProject”  (bool)  Should data files be saved next to project. 
Associated settings for Simulx projects are:
“directory”  (string)  Path to the folder where simulation results will be saved. It should be a writable directory. 
“seed”  (0< int <2147483647)  Seed used by random generators. 
“userfilesnexttoproject”  (bool)  Should user files be saved next to project. 
Usage
setProjectSettings(...)
Arguments
... 
A collection of commaseparated pairs {settingName = settingValue}. 
Examples
setProjectSettings(directory = "/path/to/export/directory", seed = 12345)
6.FAQ
This page summarizes the frequent questions about PKanalix.
 Resolution and display
 Regulatory
 Running PKanalix
 Input data
 Settings (options)
 Settings (output results)
 Results
 Share your feedback
Resolution and display
 OpenGL technology impact on remote access: the PKanalix interface uses OpenGL technology. Unfortunately, remote access using direct rendering is not compatible with OpenGL, as the OpenGL application sends instructions directly to the local hardware, bypassing the target X server. As a consequence, PKanalix cannot be used with X11 forwarding. Instead, indirect rendering should be used, where the remote application sends instructions to the X server, which transfers them to the graphics card. It is possible to do this with an ssh application, but it requires a dedicated configuration depending on the machine and the operating system. Other applications such as VNC or Remmina can also be used for indirect rendering.
 If the graphical user interface appears with too high or too low resolution, follow these steps:
 open PKanalix
 load any project from the demos
 in the menu, go to Settings > Preferences and disable the “High dpi scaling” in the Options.
 close PKanalix
 restart PKanalix
Regulatory
 Are NCA and CA analyses done with PKanalix accepted by the regulatory agencies like the FDA and EMA? Yes.
 How to cite PKanalix? Please reference it as:
PKanalix version 2021R2. Antony, France: Lixoft SAS, 2021.
Running PKanalix
 On what operating systems does PKanalix run? PKanalix runs on Windows, Linux and macOS.
 Is it possible to run PKanalix from the command line? It is possible to run PKanalix from the R command line. A full R API providing complete flexibility for running and modifying PKanalix projects is described here.
Input data
 Does PKanalix support sparse data? No.
 Does PKanalix support drugeffect or PD models? No.
 What type of data can PKanalix handle? Extravascular, intravascular infusion, intravascular bolus for singledose or steadystate plasma concentration and singledose urine data can be used. See here.
 Can I give the concentration data and dosing data as separate files? No.
 Can I give the dosing information directly via the interface? No.
 Can I have BLQ data? Yes, see here.
 Can I define units? Yes, see here.
 Can I define variables such as “Sort” and “Carry”? Yes, check here.
 Is it possible to define several dosing routes within a single data set? Yes, check the ADMINISTRATION ID column.
 Can I use dose normalization to scale the dose by a factor? No, this must be done before using PKanalix.
Settings (options)
 How do I indicate the type of model/data? Extravascular versus intravascular is set in the Settings window. Single dose versus steady-state and infusion versus bolus are inferred from the data set column types.
 Can I exclude individuals with insufficient data? The “Acceptance criteria” settings allow the user to define acceptance thresholds. In the output tables, each individual is flagged according to those criteria. The flags can be used to filter the results outside PKanalix.
 Can I set the data as being sparse? No, sparse data calculations are not supported.
 Which options are available for the lambda_z calculation? Points to be included in the lambda_z calculation can be defined using the adjusted R2 criterion (called “best fit” in WinNonlin), the R2 criterion, a time range or a number of points. In addition, a specific range per individual can be defined, as well as points to include or exclude. Check the lambda_z options here and here.
 Can I define several partial areas? Yes.
 Can I save settings as a template for future projects? Not for settings. However, the user can set the data set headers that should be recognized automatically in “Settings > Preferences”.
 Do you have a “slope selector”? Yes, check here for information on the “Check lambda_z“.
 Can I define a therapeutic response range? No.
 Can I set a user-defined weighting for the lambda_z? No, but the most common weighting options are available as settings.
 Can I disable the calculation of lambda_z (curve stripping in WinNonlin)? Set the general rule to a time interval that does not contain any point. For a single individual, select a single point. This will lead to a failure of the lambda_z calculation, and parameters relying on lambda_z will not be calculated.
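For reference, the lambda_z that these settings control is, by definition, the negative slope of a linear regression of log-concentration against time over the selected terminal points. A self-contained base-R illustration of that definition and of the span acceptance criterion (illustrative data, not PKanalix's internal code; uniform weighting shown):

```r
# illustrative terminal-phase data (time, concentration)
time <- c(1, 2, 3, 5, 7, 9)
conc <- c(10.0, 7.1, 5.0, 2.5, 1.2, 0.6)

terminal <- 3:6                          # indices of the points kept for lambda_z
fit <- lm(log(conc[terminal]) ~ time[terminal])
lambda_z <- -unname(coef(fit)[2])        # negative slope of the log-linear fit

t_half <- log(2) / lambda_z              # terminal half-life
span <- (max(time[terminal]) - min(time[terminal])) / t_half
# a span above 3 satisfies the default span acceptance criterion described above
```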
Settings (output results)
 Can I change the name of the parameters? No. However the output files contain both PKanalix names and CDISC names.
 Can I export the result table? The result tables are automatically saved as text files in the result folder. In addition, result tables can be copy-pasted to Excel or Word using the copy button at the top right of the tables.
 Can I generate a report? No, but the result tables can be copy-pasted to Excel or Word using the copy button at the top right of the tables. Plots can be exported as images via the menu “Export > Export plots” or by clicking on the save button at the top of each plot.
 Can I choose which parameters to show in the result table and to export to the result folder file? Yes, see here.
 Are the calculation rules the same as in WinNonlin? Yes.
 Can I define the result folder myself? By default, the result folder corresponds to the project name. However, you can define it yourself. See here for how to define it in the user interface.
Results
 What result files are generated by PKanalix?
 Can I replot the plots using another plotting software? Yes, if you go to the menu Export and click on “Export charts data”, all the data needed to reproduce the plots are stored in text files.
 When I open a project, my results are not loaded (message “Results have not been loaded due to an old inconsistent project”). Why? When loading a project, PKanalix checks that the project being loaded (i.e. all the information saved in the .pkx file) is the same as the project that was used to generate the results. If not, this error message is shown and the results are not loaded, because they are inconsistent with the loaded project.
Share your feedback
 I would like to give a suggestion for the future versions of PKanalix. Share your feedback!
 I need more details in the documentation for a specific feature. Share your feedback!
7. Case studies
Different case studies are presented here to show how to use Monolix and the MonolixSuite for modeling and simulation.
Warfarin case study
This video case study shows a simple PK modeling workflow in Monolix 2018, using the example of warfarin. It explains the main features and algorithms of Monolix that guide the iterative process of model building: from validating the structural model to adjusting the statistical model step by step.
It includes picking a model from the libraries, choosing initial estimates with the help of population predictions, estimating parameters and uncertainty, and diagnosing the model with interactive plots and statistical tests.
Tobramycin case study
The case study is presented in 5 sequential parts, which we recommend reading in order: Part 1: Introduction, Part 2: Data visualization with Datxplore, Part 3: Model development with Monolix, Part 4: Model exploration with Mlxplore, and Part 5: Dosing regimen simulations with Simulx.
Remifentanil case study
This case study shows how to use Monolix to build a population pharmacokinetic model for remifentanil, in order to determine the influence of subject covariates on the individual parameters.
Link to Remifentanil case study
Longitudinal Model-Based Meta-Analysis (MBMA) with the MonolixSuite
Longitudinal model-based meta-analysis (MBMA) models can be implemented with the MonolixSuite. These models use study-level aggregate data from the literature and can usually be formulated as nonlinear mixed-effects models in which the inter-arm variability and the residual error are weighted by the number of individuals per arm. We exemplify the model development and analysis workflow of MBMA models in Monolix using a real data set for rheumatoid arthritis, following the publication by Demin et al. (2012). In this case study, the efficacy of a drug in development (Canakinumab) is compared to the efficacy of two drugs already on the market (Adalimumab and Abatacept). Simulations with Simulx were used for decision support, to assess whether the new drug has a chance to outperform the existing ones.
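The arm-size weighting mentioned above can be made concrete: for aggregate data, the residual standard deviation attached to an arm-level mean is the individual-level standard deviation shrunk by the square root of the arm size. This is a minimal sketch of the weighting idea in plain Python, not the Monolix implementation.

```python
import math

def arm_residual_sd(sigma_individual, n_arm):
    # Standard error of an arm-level mean in aggregate-data MBMA:
    # the individual-level sigma divided by sqrt(n). Larger arms
    # therefore get smaller residual error, i.e. more weight.
    return sigma_individual / math.sqrt(n_arm)

print(arm_residual_sd(20.0, 4))    # 10.0
print(arm_residual_sd(20.0, 100))  # 2.0
```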
Link to MBMA case study
Analysis of time-to-event data
Within the MonolixSuite, the Mlxtran language makes it possible to describe and model time-to-event data using a parametric approach. This page provides an introduction to time-to-event data, the different ways to model this kind of data, and typical parametric models. A library of common TTE models is also provided.
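To illustrate what a typical parametric TTE model looks like, the sketch below uses a Weibull survival function and inverse-transform sampling of event times. This is plain Python with arbitrary parameter values, not Mlxtran code from the TTE library.

```python
import math
import random

def survival(t, lam, k):
    # Weibull survival function S(t) = exp(-(lam*t)**k);
    # k = 1 reduces to the exponential (constant-hazard) model.
    return math.exp(-((lam * t) ** k))

def sample_event_time(lam, k, rng):
    # Inverse-transform sampling: solve S(T) = U for U ~ Uniform(0, 1).
    u = 1.0 - rng.random()  # in (0, 1], avoids log(0)
    return (-math.log(u)) ** (1.0 / k) / lam

rng = random.Random(1)
print(survival(2.0, 0.5, 1.0))  # exp(-1) ≈ 0.368
print([sample_event_time(0.5, 1.0, rng) for _ in range(3)])
```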
Two modeling and simulation workflows illustrate this approach, using two TTE data sets:
Veralipride case study
Multiple peaking in plasma concentration-time curves is not uncommon and can create difficulties in the determination of pharmacokinetic parameters.
For example, double peaks have been observed in plasma concentrations of veralipride after oral absorption. While multiple peaking can be explained by different physiological processes, in this case site-specific absorption has been suggested as the major mechanism. In this webinar we explore this hypothesis by setting up a population PK modeling workflow with the MonolixSuite 2018.
The step-by-step workflow includes visualizing the data set to characterize the double peaks, setting up and estimating a double absorption model, assessing the uncertainty of the parameter estimates to avoid over-parameterization, and simulating the model.
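The double-peak behaviour described above can be reproduced with a toy model in which the dose is split into two fractions absorbed first-order from two sites, the second with a lag time. The parameter values below are hypothetical, and this plain-Python sketch is not the Mlxtran model used in the webinar.

```python
import math

def one_cpt_oral(t, dose, ka, ke, V, tlag=0.0):
    # One-compartment, first-order absorption (Bateman function),
    # shifted by an absorption lag time tlag. The degenerate
    # ka == ke case is not handled in this sketch.
    if t <= tlag or ka == ke:
        return 0.0
    tt = t - tlag
    return dose * ka / (V * (ka - ke)) * (math.exp(-ke * tt) - math.exp(-ka * tt))

def double_absorption(t):
    # Hypothetical split: 60% absorbed immediately, 40% from a
    # second site with a 4 h lag, giving the second peak.
    return (one_cpt_oral(t, 60.0, 2.0, 0.2, 10.0)
            + one_cpt_oral(t, 40.0, 2.0, 0.2, 10.0, tlag=4.0))

# Sample the profile from 0 to 12 h and count local maxima
profile = [double_absorption(t / 4) for t in range(0, 49)]
peaks = [i for i in range(1, len(profile) - 1)
         if profile[i - 1] < profile[i] > profile[i + 1]]
print(len(peaks))  # 2: the simulated profile shows a double peak
```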