* `linear_filter()` takes floating point errors into account when checking whether the alpha values sum to 1.
* `get_kernel` is renamed `get_kernelmatrix`. The function `get_kernel` is deprecated.
* `tskrrHomogenous` and dependent classes are now called `tskrrHomogeneous`. The same correction is done for `tskrrHeterogenous`, now `tskrrHeterogeneous`. This might affect code that uses `get_loo_fun` based on the class name.
* `tskrrHomogeneousImpute` and `tskrrHeterogeneousImpute` were renamed to `tskrrImputeHomogeneous` and `tskrrImputeHeterogeneous` to follow the naming convention for the classes.
* The `permtest` class now has getters that allow you to extract the information from the test.
* The settings `"interaction"` and `"both"` are now called `"edges"` and `"vertices"` respectively. These give the same results, but make it clearer what actually happens. This is adapted in the functions `loo()`, `get_loo_fun()`, `tune()` and those that depend on them.
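As a minimal sketch of the renamed exclusion settings, assuming these functions come from the `xnet` package; the toy homogeneous model, its data and the `lambda` value are made up:

```r
library(xnet)                                  # assumed package providing tskrr, loo, ...
set.seed(1)
K <- crossprod(matrix(rnorm(64), 8))           # toy symmetric kernel matrix
dimnames(K) <- list(letters[1:8], letters[1:8])
A <- matrix(rbinom(64, 1, 0.3), 8)
Y <- pmax(A, t(A))                             # symmetric toy adjacency matrix
dimnames(Y) <- dimnames(K)
mod <- tskrr(Y, K, lambda = 0.01)              # homogeneous two-step kernel ridge regression

loo(mod, exclusion = "edges")      # formerly exclusion = "interaction"
loo(mod, exclusion = "vertices")   # formerly exclusion = "both"
```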
* The `g` matrix in `predict()` now expects the new nodes to be on the rows.
* The `permtest` function is added.
* `K` and `G` for the function `predict()` have been renamed `k` and `g` (lower case).
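A hedged sketch of the updated `predict()` interface on a toy heterogeneous model (this builds on the `library(xnet)` assumption above; all data, dimensions and the `lambda` value are made up):

```r
set.seed(2)
K2 <- crossprod(matrix(rnorm(64), 8))
dimnames(K2) <- list(LETTERS[1:8], LETTERS[1:8])
G2 <- crossprod(matrix(rnorm(36), 6))
dimnames(G2) <- list(letters[1:6], letters[1:6])
Y2 <- matrix(rbinom(48, 1, 0.3), 8, 6, dimnames = list(LETTERS[1:8], letters[1:6]))
het_mod <- tskrr(Y2, K2, G2, lambda = 0.01)

# kernel values of the new nodes versus the training nodes: new nodes on the rows
k_new <- matrix(runif(2 * 8), 2, 8, dimnames = list(c("new1", "new2"), LETTERS[1:8]))
g_new <- matrix(runif(3 * 6), 3, 6, dimnames = list(c("nA", "nB", "nC"), letters[1:6]))
predict(het_mod, k = k_new, g = g_new)         # arguments are now lower case k and g
```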
* `loo` now adds the labels to the output (except for linear filters).
* `tune` now allows for a one-dimensional grid search for heterogeneous networks. Set `onedim = TRUE` to avoid a full grid search.
* `has_onedim` tells whether the grid search was one-dimensional or not. This is a getter for the appropriate slot in the `tskrrTune` class.
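Continuing the toy heterogeneous model above, a hedged illustration of the one-dimensional search (grid settings are left at their defaults, which are not described here):

```r
het_tuned <- tune(het_mod, onedim = TRUE)   # search along one dimension instead of the full grid
has_onedim(het_tuned)                       # TRUE: the search was one-dimensional
```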
* `plot_grid` allows you to plot the loss as a function of the searched grid after tuning a model. It deals with both 1D and 2D grids and can be used for quick evaluation of the optimal lambda values.
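For example, with the tuned object from the previous sketch (a hedged, minimal call; further plotting arguments are not shown):

```r
plot_grid(het_tuned)   # loss versus the searched lambda grid (1D here, 2D for a full grid search)
```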
* `residuals` allows you to calculate the residuals based on the predictions or on the loo values of choice.
* A `plot` method is now available for `tskrr` objects. It allows you to plot the fitted values, residuals, original response and the results of different loo settings, together with dendrograms based on the kernel matrices.
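A hedged sketch on the toy homogeneous model from the first example; the argument that switches `residuals` to loo-based values is not named here, so only the default calls are shown:

```r
res <- residuals(mod)   # residuals based on the fitted predictions
plot(mod)               # default view; the other views (loo, residuals, response) are also available
```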
* `predict` didn't give correct output when only `g` was passed. Fixed.
* `colnames` didn't get the correct labels for homogeneous networks.
* `impute_loo` is removed from the package.
* `eigen2hat`, `eigen2map` and `eigen2matrix` had the second argument renamed from `vec` to `val`. The old name implied that the second argument took the vectors, which it doesn't!
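A sketch of the renamed argument; only the second argument's name (`val`) is stated above, so the other arguments are passed positionally and the use of `eigen()` output here is an assumption:

```r
e <- eigen(K)                               # eigendecomposition of the toy kernel matrix above
H <- eigen2hat(e$vectors, e$values, 0.01)   # second argument (val) takes the eigenvalues, not the vectors
```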
* The `tskrrImpute` virtual class is added to represent imputed models.
* `is_symmetric` didn't take absolute values when comparing. Fixed.
* `show` methods for objects are cleaned up.
* `predict` gave nonsensical output. Fixed.
* `valid_labels` now requires the K and G matrices to have the same ordering of row and column names. Otherwise the matrix wouldn't be symmetric and couldn't be used.
* `linear_filter` now forces the alphas to sum up to 1.
* `tune` now returns an object of class `tskrrTuneHomogenous` or `tskrrTuneHeterogenous`.
* `tskrrTune` provides a more complete object with all information on the tuning. It is a superclass with two real subclasses, `tskrrTuneHeterogenous` and `tskrrTuneHomogenous`.
* `tune` now allows you to pass the matrices directly, so you don't have to create a model with `tskrr` first.
* `linear_filter` gave totally wrong predictions due to a code error: fixed.
* `linear_filter` returned a matrix when NAs were present: fixed.
* `fitted` now has an argument `labels`, which allows you to add the labels to the returned object.
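A short hedged example, assuming `labels` is a logical flag and reusing the toy model from the first sketch:

```r
head(fitted(mod, labels = TRUE))   # fitted values with the row/column labels attached
```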
* `tskrr` now returns an error if the Y matrix is neither symmetric nor skewed when fitting a homogeneous network.
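A hedged illustration with the toy kernel matrix from the first sketch (the asymmetric matrix is made up):

```r
A_bad <- matrix(rbinom(64, 1, 0.3), nrow = 8, dimnames = dimnames(K))
try(tskrr(A_bad, K, lambda = 0.01))               # errors: Y is neither symmetric nor skewed
tskrr(pmax(A_bad, t(A_bad)), K, lambda = 0.01)    # a symmetrised Y fits fine
```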
* `labels` now produces more informative errors and warnings.
* In the testing procedures: input testing for `tskrr` has been moved to its own function and is now also used by `impute_tskrr`.
* Changes to the classes `tskrr`, `tskrrHeterogenous` and `tskrrHomogenous`:
    * `has.orig` has been removed, as it doesn't make sense to keep the original kernel matrices. It is replaced by a slot `has.hat`, allowing the hat matrices to be stored.
    * `k.orig` and `g.orig` have been replaced by the slots `Hk` and `Hg` to store the hat matrices. These are what is actually needed for fitting etc.
    * `has_original` has been removed and replaced by `has_hat`.
    * The argument `keep` of the function `tskrr` now stores the hat matrices instead of the original kernel matrices.
    * `tskrr` has lost its argument `homogenous`. It didn't make sense to set that by hand.
* `tskrrHeterogenousImpute` and `tskrrHomogenousImpute` are added to allow for storing models with imputed predictions.
* Changes to `get_loo_fun()`:
    * The argument `homogenous` is removed in favor of `x`. This allows the function to be extended based on either an object or the class of that object.
    * `x` becomes the first argument.
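A hedged sketch of the new dispatch, reusing the toy model from the first example (any further arguments are not covered here):

```r
loo_fun <- get_loo_fun(mod)   # dispatches on the object (or its class) rather than a homogenous flag
is.function(loo_fun)          # TRUE: the appropriate leave-one-out routine is returned
```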
* New function `linear_filter` fits a linear filter over an adjacency matrix. It comes with a class `linearFilter`.
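A minimal hedged example on the toy adjacency matrix from the first sketch; the length-4 `alpha` (one weight per component of the filter) is an assumption:

```r
lf <- linear_filter(Y, alpha = c(0.25, 0.25, 0.25, 0.25))  # the alphas must sum to 1
class(lf)                                                  # "linearFilter"
```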
* `tune()` has a new argument `fun` that allows you to specify a function for optimization. `loss_mse()` and `loss_auc()` are provided for tuning.
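For instance, reusing the toy model from the first sketch (grid settings and the default loss are left unstated here, so they are assumptions):

```r
tuned_default <- tune(mod)                  # default loss function
tuned_auc     <- tune(mod, fun = loss_auc)  # optimise the AUC-based loss instead
```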
* `update()` allows you to retrain the model with new lambdas.
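A hedged one-liner; the `lambda` argument name is an assumption based on the `tskrr` interface:

```r
mod2 <- update(mod, lambda = 0.05)   # same data and kernels, refit with a new lambda
```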
* `tune()`: fixed.
* `tune()`: fixed.