This is the vignette for the R package recordSwapping, which can be used to apply the record swapping algorithm to a micro data set. The procedure is implemented purely in C++ and is based on the SAS code on targeted record swapping from ONS (https://ec.europa.eu/eurostat/cros/content/2-record-swapping_en). There are, however, substantial differences between the SAS and C++ code; some of these differences are the result of improving the run-time of the C++ implementation. In the next section, the differences between the two implementations are presented in more detail. The R package serves as a front end to easily call the procedures and for testing purposes.
The targeted record swapping can be applied with the function
recordSwap()
. All other functions in the package are called
from inside recordSwap()
and are only exported for testing
purposes. The function has the following arguments:
recordSwap(std::vector< std::vector<int> > data, int hid,
           std::vector<int> hierarchy,
           std::vector<std::vector<int>> similar,
           double swaprate,
           std::vector<std::vector<double>> risk, double risk_threshold,
           int k_anonymity, std::vector<int> risk_variables,
           std::vector<int> carry_along,
           int &count_swapped_records,
           int &count_swapped_hid,
           std::string log_file_name,
           int seed = 123456)
Some remarks on the arguments:

risk: risk[0] corresponds to the risks of the first record for all hierarchy levels, risk[1] to those of the second record, and so on. This argument should be ignored for now, as it is not fully tested yet.

k_anonymity: records are swapped if k_anonymity > counts. This rule is only applied if risk was not supplied.

carry_along: these variables should be at household level. In case it is detected that they are at individual level (different values within hid), a warning is given because unexpected results may occur, since the first observed value within the household for any variable is used for all members of the swapped household.

log_file_name: the log file contains the household IDs (hid) that could not have been swapped and is only created if any such households exist.

IMPORTANT: The argument data contains the micro data and can be understood as a vector of vectors. Inside the function, data is expected to contain each column of the input data as an std::vector<int>; these column vectors are in turn stored in an std::vector< std::vector<int> >. So data[0] addresses the values of the first column (the first variable for all records) and data[0][0] the value of the first variable for the first record. The argument risk is structured as a vector of vectors in the same way.
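As a quick illustration of this column-wise layout, consider the toy sketch below. The names toy and data_cpp are made up for this example, and the list of columns is only a rough R analogue of the std::vector< std::vector<int> > used on the C++ side, not the wrapper's actual conversion code.
# rough R analogue of the column-wise layout (illustration only)
toy <- data.frame(geo = c(1L, 1L, 2L),
                  hid = c(1L, 2L, 3L))
data_cpp <- lapply(toy, as.integer)  # list of columns ~ vector of vectors
data_cpp[[1]]     # first column, i.e. values of the first variable for all records
data_cpp[[1]][1]  # value of the first variable for the first record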
By default, the risk of each record is derived from counts over the geographic hierarchy levels (hierarchy) and the combination of all risk variables. The counts are computed for each hierarchy level (hierarchy) separately over the combination of the risk variables, yielding a single risk value for each record at each hierarchy level:
\[ r_{i,h} = (\sum\limits_{j=1}^{N_{g_1}}1[v_{1(i)}=v_{1(j)} \land ... \land v_{p(i)}=v_{p(j)}])^{-1} \quad , \]
with \(v_1,\ldots,v_p\) as a set of risk variables, \(N_{g_1}\) as the number of persons living in region \(g_1\) and \(1[...]\) as the indicator function. \(1[v_{1(i)}=v_{1(j)} \land ... \land v_{p(i)}=v_{p(j)}]\) is 1 if individual \(j\) has the same values for the risk variables \(v_1,\ldots,v_p\) as individual \(i\) and is 0 otherwise. Loosely speaking,
\[ r_{i,h} \sim \frac{1}{counts} \quad . \]
The sampling probability for household \(y\), \(p_{y,h}\), in hierarchy level \(h\) is then defined by the maximum risk across all household members
\[ p_{y,h} = \max_{i\text{ in household }y}(r_{i,h}) \quad . \] This sampling probability is used for selecting households for swapping as well as donor households.
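The data.table sketch below illustrates these two steps (counts-based record risk, then the household-level maximum) on a toy data set. It only mimics the default risk definition described above; it is not the package's internal code, and the object names and values are made up.
library(data.table)
# toy data: one hierarchy level (nuts1) and two risk variables
toy <- data.table(hid      = c(1, 1, 2, 2, 3),
                  nuts1    = c(1, 1, 1, 1, 1),
                  gender   = c(1, 2, 1, 1, 2),
                  ageGroup = c(3, 4, 3, 3, 4))
# counts over the combination of risk variables within the region
toy[, counts := .N, by = .(nuts1, gender, ageGroup)]
toy[, risk := 1/counts]              # r_{i,h}
toy[, p_hh := max(risk), by = hid]   # p_{y,h}: household level probability
toy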
\[ p_i = \begin{cases} 0.999 \quad \text{for low risk household} \\ \frac{b\cdot N_{high}}{SA\cdot c-b\cdot c} \quad \text{for }b>0 \\ \frac{0.2\cdot N_{high}}{SA\cdot c-0.2\cdot c} \quad \text{for }b=0 \\ \frac{0.1\cdot N_{high}}{SA\cdot c-0.1\cdot c} \quad \text{for }b<0 \end{cases} \] where \[ b = SA - N_{high}\\ c = N_{netto} - N_{high} \] with \(SA\) as the sample size, \(N_{high}\) as the number of high risk households in the geographic area and \(N_{netto}\) the number of non-imputed records in the geographic area.
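Evaluating, for example, the branch for \(b>0\) of the formula above with some made-up numbers:
# made-up numbers, only to evaluate the b > 0 branch of the formula above
SA      <- 50     # sample size
N_high  <- 10     # high risk households in the area
N_netto <- 800    # non-imputed records in the area
b  <- SA - N_high         # 40
cc <- N_netto - N_high    # 790, "c" in the formula
b * N_high / (SA * cc - b * cc)   # ~ 0.0506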
The proportion of households to be swapped is controlled by the parameter swaprate. If the proportion of already swapped households exceeds this value, records that do not fulfil the k-anonymity are still swapped. Figure 1 displays an example with hierarchy levels NUTS1 > NUTS2 > NUTS3, where the numbers of high risk households are displayed at the end of the edges. For instance, in the first NUTS1 region, there are 5 high risk households that will be swapped with households from other NUTS1 regions. In the first NUTS2 region, there are 10 high risk households that will be swapped with households that are not in the same NUTS2 region. At the lowest level, the NUTS3 regions, the number of swaps, \(n_{swaps}\), for the first district is defined by
\[ n_{swaps} = 2 + Rest\\ Rest = \max(0,N\cdot s - n_{already}) \] with \(N\) as the number of households in the district, \(n_{already}\) as the number of already swapped households in the district and \(s\) as the swap rate.
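A toy calculation with made-up numbers for a single district:
# made-up numbers for a single NUTS3 district
N <- 200          # households in the district
s <- 0.05         # swap rate
n_already <- 6    # households already swapped at higher hierarchy levels
Rest <- max(0, N*s - n_already)
n_swaps <- 2 + Rest
n_swaps   # 6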
\[ SWAP_m = \frac{SIZE_m+RISK_m}{2} \] where \(SIZE_m\) can be derived by using the reciprocal number of households of each municipality in county \(n\).
\[ SIZE_m = \frac{N_m^{-1}}{\sum_iN_i^{-1}}\cdot sN_n \] with \(N_m\) as the number of households in municipality \(m\), \(s\) the global swaprate and \(N_n\) as the number of households in county \(n\). \(RISK_m\) can be derived using the proportion of high risk households in each municipality
\[ RISK_m = \frac{H_m}{\sum_iH_i}\cdot sN_n \]
with \(H_m\) as the proportion of high risk households in municipality \(m\).
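The following toy calculation with made-up numbers for three municipalities in one county shows how \(SIZE_m\), \(RISK_m\) and \(SWAP_m\) relate; the vectors N_m and H_m are purely illustrative.
# made-up numbers for three municipalities in one county
N_m <- c(100, 400, 500)      # households per municipality
H_m <- c(0.10, 0.02, 0.05)   # proportion of high risk households
s   <- 0.05                  # global swaprate
N_n <- sum(N_m)              # households in the county

SIZE <- (1/N_m) / sum(1/N_m) * s * N_n
RISK <- H_m / sum(H_m) * s * N_n
SWAP <- (SIZE + RISK) / 2
round(cbind(SIZE, RISK, SWAP), 1)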
The package was tested on randomly generated data, which contained 5 geographic levels and some other sociodemographic variables.
library(sdcMicro)
library(data.table)

dat <- createDat(N=100000)
dat
## nuts1 nuts2 nuts3 lau2 hid hsize ageGroup gender national htype
## 1: 1 12 1210 12104 1 6 4 2 2 6
## 2: 1 12 1210 12104 1 6 6 1 3 6
## 3: 1 12 1210 12104 1 6 6 2 3 6
## 4: 1 12 1210 12104 1 6 5 2 1 6
## 5: 1 12 1210 12104 1 6 4 1 1 6
## ---
## 349731: 3 31 3115 31155 99998 4 6 1 4 10
## 349732: 2 23 2311 23112 99999 1 7 2 1 10
## 349733: 3 32 3206 32065 100000 3 3 2 4 3
## 349734: 3 32 3206 32065 100000 3 7 2 2 3
## 349735: 3 32 3206 32065 100000 3 2 2 2 3
## hincome
## 1: 8
## 2: 8
## 3: 8
## 4: 8
## 5: 8
## ---
## 349731: 5
## 349732: 4
## 349733: 4
## 349734: 4
## 349735: 4
Applying the record swapping to dat could look like this
<- c("nuts1","nuts2")
hierarchy <- c("hincome","ageGroup","gender")
risk_variables <- 3
k_anonymity <- .05
swaprate <- "hid"
hid <- "hsize"
similar
dat_swapped <- recordSwap(data = dat, hid = hid,
                          hierarchy = hierarchy,
                          similar = similar,
                          risk_variables = risk_variables,
                          k_anonymity = k_anonymity,
                          swaprate = swaprate)
## Recordswapping was successful!
dat_swapped
Here the procedure was applied to dat using nuts1 and nuts2 as the geographic hierarchy, hsize as the similarity variable (so only households with the same household size are swapped) and hincome, ageGroup and gender as risk variables.
If k_anonymity <- 0 only the swaprate is considered. Then at most swaprate*100 % of the households are swapped. If the sample is very small, the actual number of swaps can be smaller; however, this can only happen if some regions have a very small number of households, e.g. 1,2,3,…
k_anonymity <- 0
swaprate <- .05

dat_swapped <- recordSwap(data = dat, hid = hid,
                          hierarchy = hierarchy,
                          similar = similar,
                          risk_variables = risk_variables,
                          k_anonymity = k_anonymity,
                          swaprate = swaprate)
## Recordswapping was successful!
dat_swapped
Comparing the number of swapped households:
dat_compare <- merge(dat[,.(paste(nuts1[1],nuts2[1])),by=hid],
                     dat_swapped[,.(paste(nuts1[1],nuts2[1])),by=hid],by="hid")
# number of swapped households
nrow(dat_compare[V1.x!=V1.y])
## [1] 5000
# swaprate times number of households in data
dat[,uniqueN(hid)]*swaprate
## [1] 5000
Instead of column names, index vectors can be supplied for parameters
hid
, hierarchy
, similar
and
risk_variables
.
hierarchy <- c(1,2) # ~ c("nuts1","nuts2")
risk_variables <- c(11,7,8) # ~ c("hincome","ageGroup","gender")
hid <- 5 # ~ "hid"
similar <- 6 # ~ "hsize"
dat_swapped <- recordSwap(data = dat, hid = hid,
                          hierarchy = hierarchy,
                          similar = similar,
                          risk_variables = risk_variables,
                          k_anonymity = k_anonymity,
                          swaprate = swaprate)
## Recordswapping was successful!
Please note that the underlying C++ routines expect indices starting from 0, whereas in R indices start with 1. The wrapper function recordSwap() converts indices or column names supplied in R into the correct format for the C++ routines. So column indices for this function call should be supplied in the usual R fashion, starting with 1.
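For instance, the indices used above can be recovered from the column names. The match() calls below are just an illustration; the conversion itself (including the shift to 0-based indices) is handled by the wrapper.
# 1-based R indices of the columns used above
match(c("nuts1","nuts2"), colnames(dat))                 # 1 2    ~ hierarchy
match(c("hincome","ageGroup","gender"), colnames(dat))   # 11 7 8 ~ risk_variables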
In some cases, the condition for finding a similar household, given by the parameter similar, might be too strict. It may then not be possible to swap the necessary number of households due to the lack of suitable donor households.
# demonstrate on small data set
dat <- createDat(N=10000)
hierarchy <- c("nuts1","nuts2")
risk_variables <- "gender"
# similarity profile contains:
# nuts1 + hsize + htype + hincome
similar <- c("nuts1","hsize","htype","hincome")
# procedure will not always find a suitable donor
dat_swapped <- recordSwap(data = dat, hid = hid,
                          hierarchy = hierarchy,
                          similar = similar,
                          risk_variables = risk_variables,
                          k_anonymity = 3,
                          swaprate = 0.05,
                          seed = 123456L)
## Donor household was not found in 5 case(s).
## See TRS_logfile.txt for a detailed list
The expected number of swapped households for a population of 10000 households and a swapping rate of 0.05 is 500. The actual number of swaps, however, was:
dat_compare <- merge(dat[,.(paste(nuts1[1],nuts2[1])),by=hid],
                     dat_swapped[,.(paste(nuts1[1],nuts2[1])),by=hid],by="hid")
# number of swapped households
nrow(dat_compare[V1.x!=V1.y])
## [1] 490
With the parameter similar, multiple similarity profiles can be defined if the parameter input is a list. If a donor could not be found for the first similarity profile (similar[[1]]), then a donor is searched for using the next similarity profile (similar[[2]]), and so on.
Using multiple similarity profiles makes it easy to supply fall-back profiles if the initial profile is too specific.
# additional profile contains only hsize
similar <- list(similar)
similar[[2]] <- "hsize"
similar
## [[1]]
## [1] "nuts1" "hsize" "htype" "hincome"
##
## [[2]]
## [1] "hsize"
# procedure found donors for every record
dat_swapped <- recordSwap(data = dat, hid = hid,
                          hierarchy = hierarchy,
                          similar = similar,
                          risk_variables = risk_variables,
                          k_anonymity = 3,
                          swaprate = 0.05,
                          seed = 123456L)
## Recordswapping was successful!
dat_compare <- merge(dat[,.(paste(nuts1[1],nuts2[1])),by=hid],
                     dat_swapped[,.(paste(nuts1[1],nuts2[1])),by=hid],by="hid")
# number of swapped households
nrow(dat_compare[V1.x!=V1.y])
## [1] 500
Using the function recordSwap() as above always results in swapping the variables defined through the parameter hierarchy. Sometimes it might be useful to swap more variables than the ones stated in hierarchy.
When we apply the record swapping using the hierarchy levels nuts1 and nuts2, the nuts3 and lau2 variables in our data set stay unchanged. Thus, in the resulting data set, the variables nuts1 and nuts2 are no longer coherent with nuts3 and lau2.
Let's have a more detailed look at the problem:
<- "hid"
hid <- c("nuts1","nuts2")
hierarchy <- c("hsize")
similar <- c("hincome","htype")
risk_variables
dat_swapped <- recordSwap(data = copy(dat),
                          hid = hid,
                          hierarchy = hierarchy,
                          similar = similar,
                          risk_variables = risk_variables,
                          swaprate = 0.05,
                          seed=1234L)
## Recordswapping was successful!
# compare results
dat_compare <- merge(dat[,.(paste(nuts1[1],nuts2[1])),by=hid],
                     dat_swapped[,.(paste(nuts1[1],nuts2[1])),by=hid],by="hid")

head(dat_compare[V1.x!=V1.y])
## hid V1.x V1.y
## 1: 9 3 34 2 24
## 2: 37 2 23 2 24
## 3: 77 3 33 3 31
## 4: 83 3 35 1 13
## 5: 84 2 24 2 22
## 6: 85 2 25 2 24
For nuts1==1 and nuts2==14 the nuts3 variable takes on the values
dat[nuts1==1&nuts2==14,sort(unique(nuts3))]
## [1] 1401 1402 1403 1404 1405 1406 1407 1408 1409 1410 1411 1412 1413 1414 1415
In the swapped data set, however, there are now many more values for nuts3 which are not coherent with nuts1==1 and nuts2==14:
dat_swapped[nuts1==1&nuts2==14,sort(unique(nuts3))]
## [1] 1106 1107 1108 1111 1302 1306 1308 1315 1401 1402 1403 1404 1405 1406 1407
## [16] 1408 1409 1410 1411 1412 1413 1414 1415 1501 1504 1506 1508 1511 1513 1514
## [31] 2109 2111 2206 2211 2214 2302 2310 2408 2413 2503 2504 2510 2513 2515 3112
## [46] 3202 3205 3210 3215 3307 3313 3402 3514
Using the parameter carry_along, one can define certain variables which are additionally swapped but do not interfere with the risk calculation, the sampling or the procedure of finding a donor.
dat_swapped2 <- recordSwap(data = copy(dat),
                           hid = hid,
                           hierarchy = hierarchy,
                           similar = similar,
                           risk_variables = risk_variables,
                           swaprate = 0.05,
                           carry_along = c("nuts3","lau2"), # <- swap nuts3 and lau2 variable as well
                           seed=1234L)
## Recordswapping was successful!
<- c("nuts1", "nuts2", "nuts3")
geoVars <- dat[!duplicated(hid),..geoVars]
dat_geo setorderv(dat_geo,geoVars)
<- dat_swapped2[!duplicated(hid),..geoVars]
dat_geo_swapped setorderv(dat_geo_swapped,geoVars)
# check if value combinations of swapped and original data are the same
all.equal(dat_geo,dat_geo_swapped)
## [1] TRUE
Both the original and the swapped data set have the same value combinations for nuts1, nuts2 and nuts3. Setting this parameter did not, however, interfere with the swapping procedure:
dat_compare2 <- merge(dat[,.(paste(nuts1[1],nuts2[1])),by=hid],
                      dat_swapped2[,.(paste(nuts1[1],nuts2[1])),by=hid],by="hid")

# check if same hid were swapped in both cases
all.equal(dat_compare2[V1.x!=V1.y][order(hid),.(hid)],
          dat_compare[V1.x!=V1.y][order(hid),.(hid)])
## [1] TRUE
Using the same idea, one can set return_swapped_id = TRUE to return the hid with which the records were swapped.
dat_swapped3 <- recordSwap(data = copy(dat),
                           hid = hid,
                           hierarchy = hierarchy,
                           similar = similar,
                           risk_variables = risk_variables,
                           swaprate = 0.05,
                           carry_along = "nuts3",
                           return_swapped_id = TRUE,
                           seed=1234L)
## Recordswapping was successful!
The output now has an additional column named hid_swapped, which contains the household ID with which a household was swapped.
Number of swapped hids:
dat_swapped3[!duplicated(hid),.N,by=.(id_swapped = hid!=hid_swapped)]
## id_swapped N
## 1: FALSE 9500
## 2: TRUE 500
With the function infoLoss() one can calculate various information loss measures over a predefined frequency table. This frequency table is defined by the parameter table_vars, which accepts column names of the original and swapped micro data. The frequency table is internally constructed using both the original and the swapped micro data. Afterwards, various information loss measures are estimated over each of the table cells.
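Conceptually, this corresponds to building the frequency table on both data sets and comparing the cell counts. The data.table sketch below (with illustrative names tab_orig, tab_swap, tab_both) only demonstrates that idea; it is not the actual implementation of infoLoss().
# illustration only: frequency table nuts2 x national on original and swapped data
tab_orig <- dat[, .N, by = .(nuts2, national)]
tab_swap <- dat_swapped3[, .N, by = .(nuts2, national)]
tab_both <- merge(tab_orig, tab_swap, by = c("nuts2","national"), all = TRUE)
tab_both[is.na(N.x), N.x := 0L][is.na(N.y), N.y := 0L]
tab_both[, absD := abs(N.x - N.y)]   # absolute deviation per table cell
summary(tab_both$absD)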
# calculate information loss for frequency table nuts2 x national
table_vars <- c("nuts2","national")
iloss <- infoLoss(data=dat, data_swapped = dat_swapped3,
                  table_vars = table_vars)
iloss$measures
## what absD abssqrtD relabsD
## 1: Min 0.000000 0.00000000 0.000000000
## 2: 10% 0.000000 0.00000000 0.000000000
## 3: 20% 0.000000 0.00000000 0.000000000
## 4: 30% 1.000000 0.02344338 0.002195981
## 5: 40% 2.000000 0.04573896 0.004175365
## 6: Mean 3.578947 0.08273530 0.007657521
## 7: Median 3.000000 0.07122707 0.006741573
## 8: 60% 4.000000 0.09170863 0.008375218
## 9: 70% 5.000000 0.11747533 0.011077883
## 10: 80% 7.000000 0.15840579 0.014241102
## 11: 90% 8.600000 0.19352072 0.017585059
## 12: 95% 10.000000 0.22631982 0.020661157
## 13: 99% 11.000000 0.25959760 0.024722442
## 14: Max 11.000000 0.26235645 0.024774775
By default, the absolute deviation (\(abs(x,y)\)), the relative absolute deviation (\(r\_abs(x,y)\)) and the absolute deviation of square roots (\(abs\_sqr(x,y)\)) are calculated between the table cells x and y, see also the parameter metric.
\[ abs(x,y) = |x-y| \]
\[ r\_abs(x,y) = \frac{|x-y|}{x} \]
\[ abs\_sqr(x,y) = |\sqrt{x}-\sqrt{y}| \]
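Written as plain R functions of an original cell count x and a swapped cell count y, these defaults look roughly as follows; the function names are illustrative only, since infoLoss() computes these measures internally.
# default metrics as plain R functions (illustration only)
abs_d      <- function(x, y) abs(x - y)
r_abs_d    <- function(x, y) abs(x - y) / x
abs_sqrt_d <- function(x, y) abs(sqrt(x) - sqrt(y))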
It is also possible to supply a custom information loss metric by using the parameter custom_metric.
# define squared distance as custom metric
squareD <- function(x,y){
  (x-y)^2
}

iloss <- infoLoss(data=dat, data_swapped = dat_swapped3,
                  table_vars = c("nuts2","national"),
                  custom_metric = list(squareD=squareD))
iloss$measures # includes custom loss as well
## what absD abssqrtD relabsD squareD
## 1: Min 0.000000 0.00000000 0.000000000 0.00000
## 2: 10% 0.000000 0.00000000 0.000000000 0.00000
## 3: 20% 0.000000 0.00000000 0.000000000 0.00000
## 4: 30% 1.000000 0.02344338 0.002195981 1.00000
## 5: 40% 2.000000 0.04573896 0.004175365 4.00000
## 6: Mean 3.578947 0.08273530 0.007657521 23.74737
## 7: Median 3.000000 0.07122707 0.006741573 9.00000
## 8: 60% 4.000000 0.09170863 0.008375218 16.00000
## 9: 70% 5.000000 0.11747533 0.011077883 25.00000
## 10: 80% 7.000000 0.15840579 0.014241102 49.00000
## 11: 90% 8.600000 0.19352072 0.017585059 74.20000
## 12: 95% 10.000000 0.22631982 0.020661157 100.00000
## 13: 99% 11.000000 0.25959760 0.024722442 121.00000
## 14: Max 11.000000 0.26235645 0.024774775 121.00000
The function recordSwap() can be called using the micro data directly, as seen above, or with an sdcMicro object. Parameters for the swapping routine can be passed to the options slot when creating the sdcMicro object.
# define parameters
hierarchy <- c("nuts1","nuts2")
risk_variables <- c("hincome","ageGroup","gender")
k_anonymity <- 3
swaprate <- .05
hid <- "hid"
similar <- "hsize"
# create sdcMicro object with parameters for recordSwap()
data_sdc <- createSdcObj(dat,hhId = hid,
                         keyVars=risk_variables,
                         options = list(k_anonymity = k_anonymity,
                                        swaprate = swaprate,
                                        similar = similar,
                                        hierarchy = hierarchy))
dat_swapped_sdc <- recordSwap(data = data_sdc,
                              return_swapped_id = TRUE)
## Recordswapping was successful!
dat_swapped_sdc[!duplicated(hid),.N,by=.(id_swapped = hid!=hid_swapped)]
## id_swapped N
## 1: TRUE 500
## 2: FALSE 9500