
Just as software gets patched and updated from time to time, papers and theories evolve, get extended, and occasionally also corrected. As a big fan of scientific ethics and the Hong Kong Principles on research integrity, I list hereafter the changes, updates, and corrections that I either find myself or that others point out to me (which I appreciate and acknowledge).

Errare humanum est, and let whoever is without sin cast the first stone. The following is what escaped my eyes, as well as those of my collaborators and the hard-working people doing scientific peer review.

Tell me who you are friends with and I will tell you who you are: Unique Neighborhoods in Random Graphs

Comments, Follow-up work and Supplementary material

Correction: typo in equation (2): an "=" is missing; the equation should read \(B_{ij}=(\mathbf{A}\cdot\mathbf{1}_{n\times n}-\mathbf{A}^2 )_{ij}=\ldots \)

It seems that random bipartite graphs are, under some conditions, also almost surely UNNs, but the proof (unpublished) still needs further verification.

Security Games over Lexicographic Orders

Comments, Follow-up work and Supplementary material

For the convenience of the reader, and to ease playing around with variants of the implementation, we provide the full MATLAB code for the examples in Appendix B.1 of the work. It has been written and tested in Octave, version 5.2.0.

Cut-The-Rope: A Game of Stealthy Intrusion

Comments, Follow-up work, Updates and Corrections

For the convenience of the reader, and to ease playing around with variants of the implementation, we provide the full code of \(CutTheRope\) in \(R\), with some additional explanations inline in the code and, where more details may be needed, separately given here: lines 2 to 27 are basically a preparation of the playground. We compile the graph from a collection of all routes in terms of numeric representatives, i.e., the circled numbers in Figure 1b from the paper. The action sets \(as1\) and \(as2\) for the defender and the attacker are computed from these sets likewise, excluding the \(target\) node as a trivial (winning) strategy for the attacker. The \(for\) loops in lines 27 and 28 merely iterate over all strategies for the defender and attacker, and the inner \(for\) loop over all adversary types (one per starting point; defined in line 20 of the code) evaluates the sum in equation (3) of the paper.
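
To make this description easier to follow, here is a minimal, self-contained sketch of such a setup in R; the routes and node labels are a toy example, not the ones of Figure 1b in the paper:

    # Toy playground: routes as sequences of node labels, all ending in the target.
    routes <- list(c(1, 2, 5, 10),
                   c(1, 3, 6, 10),
                   c(1, 4, 7, 9, 10))
    target <- 10
    V   <- sort(unique(unlist(routes)))   # all nodes of the graph
    as1 <- setdiff(V, target)             # defender: nodes that can be inspected
    as2 <- setdiff(V, target)             # attacker: possible starting points,
                                          # excluding the trivially winning target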

Line 29 just prepares an empty vector to later take up the utility distribution \(U\) from eq. (2) in the paper, and the variable \(path\) is the route along which the current adversary type works its way towards \(v_0\). The \(if\) in line 32 distinguishes two cases:

Case 1: The adversary (type) \(θ\) starts from a point on the \(j\)-th route. We then construct its utility by first reducing the path to the part from \(θ\) until \(v_0\) (line 34), and second, evaluating a Poisson density for a random number \(D \in \{0,1,2,\ldots\}\) of steps taken forward (line 34). This number of steps is limited by the total distance to \(v_0\), since the adversary won't take more steps than necessary to reach \(v_0\). Line 35 is then just the conditioning on "\(D ≤\) distance to \(v_0\)", which amounts to a humble renormalization of the density (line 35) evaluated only at the relevant points (line 34). Next, we *cut the rope* by determining whether the current defender's location \(i\) is on the route (\(which\) statement in line 36), or whether the attacker can take the full number of steps without running into the defender (note that if \(i\) is not on the path, the \(which\) returns an empty result, causing \(min\) to return its second argument automatically as the only numeric input). The actual utility distribution (eq. (2) in the paper) is then evaluated on the path reduced in this way, \(π|_c = route[1:cutPoint]\), by conditioning (i.e., renormalizing) the Poisson density to this part of the route (line 37), and relocating the resulting masses to the nodes of the overall graph in line 39 (putting zero probability mass on all other locations; line 38).
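
The following minimal R sketch mirrors this step with toy values; it is an illustration of the description above, not the published script verbatim:

    V      <- 1:10                        # overall node set (toy example)
    route  <- c(1, 3, 6, 10)              # one attack path, ending in v0 = 10
    theta  <- 3                           # adversary type = its starting point
    i      <- 6                           # node currently inspected by the defender
    lambda <- 2                           # Poisson rate of attack moves

    path <- route[which(route == theta):length(route)]  # sub-path from theta to v0
    d    <- dpois(0:(length(path) - 1), lambda)         # P(D = 0), P(D = 1), ...
    d    <- d / sum(d)                                   # condition on D <= distance to v0

    cutPoint <- min(which(path == i), length(path))      # cut the rope at i, if i is on the path
    U <- rep(0, length(V))                               # zero mass on all other locations
    U[match(path[1:cutPoint], V)] <- d[1:cutPoint] / sum(d[1:cutPoint])  # renormalize on the cut path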

Case 2: The adversary \(θ\) is on a different route than the one currently considered, in which case no contribution is made to the sum (3) in the paper, simply because the term \(U(i,θ,λ)\) is undefined (the type-\(θ\) attacker does not have the \(i\)-th strategy). Here, a slight technical issue arises, since we cannot plainly use an all-\(0\) vector as a discrete distribution \(U\), as this would not be a proper distribution. We thus emulate the absence of a contribution to the utility by assigning a dominated utility distribution in that case, so that a \(⪯\)-maximizing attacker, even having the strategy \(i\) in its action space \(AS_2\), would never play that strategy (it being strictly dominated). Our choice is the degenerate distribution putting mass 1 on the starting point (line 42). This effectively corresponds to the attacker not moving at all (or moving back to the start), and thus receives zero probability in any equilibrium.
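
A minimal sketch of this placeholder, again with toy values:

    V     <- 1:10
    start <- 3                       # starting point of the type-theta adversary
    U <- rep(0, length(V))           # dominated placeholder utility: all mass on
    U[match(start, V)] <- 1          # the start node, i.e., the attacker does not move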

Having constructed the payoff for the type-\(θ\) adversary, we add it to the overall payoff (eq. (3) in the paper), weighted by \(Pr_Θ(θ) = Theta[type]\). Once this is done for all adversary types, we embed the utility distribution \(U′\) in an object suitable for the subsequent equilibrium computation (line 46), and add it to the payoff structure of the \(CutTheRope\) instance (line 47). The remaining lines 51 and 52 are then only the construction of the game object (a call to \(mosg\) to construct a multi-objective security game) and the computation of a multi-goal security strategy (a call to \(mgss\) in line 52).
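
The type-weighted aggregation can be sketched as follows; \(Theta\), \(perTypeU\) and \(Uacc\) are illustrative names, not the variables of the published script:

    V        <- 1:6                              # toy node set
    Theta    <- c(0.5, 0.3, 0.2)                 # Pr_Theta over the adversary types
    perTypeU <- list(                            # stand-ins for the per-type outputs of eq. (2)
      c(0, 0, 0.2, 0.3, 0.5, 0),
      c(0, 0.1, 0.4, 0.5, 0, 0),
      c(1, 0, 0, 0, 0, 0)                        # the dominated "does not move" case
    )
    Uacc <- rep(0, length(V))
    for (type in seq_along(Theta)) {
      Uacc <- Uacc + Theta[type] * perTypeU[[type]]   # weight by Pr_Theta(theta), eq. (3)
    }
    # Uacc is the distribution U' that gets wrapped into a loss-distribution object
    # and handed to mosg() to build the game, and mgss() to compute the security strategy.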

Let us leave aside the technical details of how games are constructed for the \(HyRiM\) package, as those are of no interest for the moment. The parameters directly correspond to telling the system that we have a discrete distribution (\(discrete = TRUE\)), specified as a probability density function (type = "pdf") and supported on the set \(V\) (\(supp = range(V)\), by default assumed to be a set of consecutive integers). The remaining parameters are explained below; the user may safely skip them without losing anything for the discussion of the resulting PBE.
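
As an illustration only, wrapping such a utility vector might look as follows; the argument names follow the description in the text and may not match the exact signature of the \(lossDistribution\) function in the installed HyRiM version:

    library(HyRiM)
    V  <- 1:6                                    # toy node set
    U1 <- c(0.5, 0.2, 0.15, 0.1, 0.05, 0)        # a utility distribution over V
    ld <- lossDistribution(U1, discrete = TRUE, type = "pdf",
                           supp = range(V), smoothing = "always", bw = 0.2)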

By convention of the HyRiM package for R, the support of an admissible \(lossDistribution\) density consists only of points having positive mass. The package depends on this being satisfied, but our game construction will in many cases assign zero mass to some points in \(V\), since it cuts attack paths at the starting point \(θ\) and the defender's inspected location \(i\). We can bypass this technical requirement by smoothing the loss distribution \(U′\) with a Gaussian kernel of specified bandwidth \(bw\), so as to assign a little bit of probability mass to all locations in all cases. This is the meaning of the two parameters smoothing = "always" and \(bw = 0.2\). From a game-theoretic perspective, this is nothing else than the addition of a small "tremble" to the adversary's play: as in a *trembling hands equilibrium*, it merely means that we enforce mixed strategies by assuming that, no matter whether the attacker intends to play \(j\), there is always a tiny chance of ending up playing some \(j' \neq j\) with likelihood \(ε>0\). This quantity is explicitly assigned by the kernel smoothing within the \(HyRiM\) package, and letting \(ε → 0\) is the same as specifying a vanishing sequence of values for the parameter \(bw\) here.
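
Conceptually, and independently of how the smoothing is implemented inside the HyRiM package, the tremble amounts to the following mixing, sketched here with an illustrative helper:

    # Mix a discrete utility distribution U with a uniform distribution, so that
    # every location carries strictly positive mass; eps -> 0 recovers U.
    tremble <- function(U, eps = 0.01) {
      stopifnot(abs(sum(U) - 1) < 1e-9)
      (1 - eps) * U + eps * rep(1 / length(U), length(U))
    }
    tremble(c(0.7, 0.3, 0, 0))   # all entries now positive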

HyRiM Package for R

Comments, Follow-up work, Updates and Corrections

The package is available from the official CRAN repository. The following is a list of updates and versions:

06.12.2022 (version 2.0.2)

  • CHANGE: updated manual pages by adding new (recent) literature references and revising explanations

09.05.2022

09.06.2021 (version 2.0.1)

  • BUGFIX: added tolerance parameter "tol" to function mgss, to account for roundoff errors beyond the fifth decimal place when solving the linear programs

05.05.2021

  • BUGFIX: in some cases, numeric roundoff errors encountered internally in the constraints of the LPs made the linear programs infeasible. Added an optional tolerance parameter "tol", defaulting to zero for exact solutions, but usable to fix the roundoff errors if needed.

04.2020:

  • CHANGE: replaced fictitious play for computing approximate security strategies by an exact algorithm based on linear programming. This is version 2.0.0 of the package, pending release on CRAN. Regarding the function \(mgss\), it is no longer backwards compatible with code written for version 1.x.x. Thanks to Ali Alshawish and Vincent Bürgin for reporting the convergence issues with fictitious play that led to this change.
  • FEATURE: added support for real-valued multi-criteria games that use lexicographically ordered payoffs.
  • FEATURE: extended the function \(disappointmentRate\) to take matrix games over the reals and compute the disappointment rate directly for them, or for a given equilibrium therein.
  • FEATURE: added convenience function for extraction and replacement of parts of a game's payoff structure (in the usual R-syntax \(x[i,j,..] <-\; replacement\), or \(x[i,j,k] \) for extraction).
  • FEATURE: added an option \(cleanUp = FALSE\) to plot.mosg to retain the SVG file for subsequent use elsewhere (e.g., in papers).

11.2019:

  • BUGFIX: error in \(moment\) function; thankfully submitted (with correction) by Ali Alshawish.

07.2019:

  • FEATURE: added vignette paper (displayable via RShowDoc("vignette", type="pdf", package="HyRiM"))

09.2018:

  • BUGFIX: function \(mosg\) incorrectly handled the parameter setting \(byrow = FALSE\).
  • BUGFIX: on categorical distributions, \(moment\) incorrectly used \(range\) rather than \(supp\).

07.2018:

  • BUGFIX: games with numeric payoffs failed if one of the payoffs was an integer power of 10 (the internal distributions became degenerate in this case).
  • BUGFIX: \(preference\) function incorrectly ignored all but the last goal.

04.2018:

  • FEATURE: added "disappointment rate" function.

02.2018:

  • BUGFIX: preference function returned wrong results for relations between numbers and distributions.

01.2018:

  • BUGFIX: preference among discrete distributions with different ranges but equal support was occasionally computed incorrectly.

08.2017:

  • FEATURE: \(plot.mosg\) now takes arguments xlim and ylim to make all plots comparable by identical axis ranges.
  • BUGFIX: improvement to the implementation of density (now returns zero only for categories as they are specified; nothing more).
  • BUGFIX: \(print.mosg\) did not list the goals by name (now does).
  • BUGFIX: density, when applied to continuous distributions, returned zero for vector arguments in some occasions.

07.2017:

  • BUGFIX: \(range\) is now properly updated after smoothing a density (previously, the range was set based on the data but not updated after smoothing, which caused errors when moments were computed, visible through warning messages).
  • FEATURE: if loss distributions with empty categories are supplied, the error message now indicates where in the game the problem occurred (row, column, and goal).

Perfectly Secure Communication, based on Graph-Topological Addressing in Unique-Neighborhood Networks

Comments, Follow-up work, Updates and Corrections

The new version as of April 2020 refines the concept of a unique neighborhood network (UNN) by distinguishing weak from strong UNNs, with distinct algebraic conditions to classify the two types. The two classes differ in how well they lend themselves to authentication, which motivated this refinement.

Defending Against Advanced Persistent Threats using Game-Theory

Comments, Follow-up work and Supplementary material

Throughout the text, the term "equilibrium" is to be understood as "lexicographic Nash equilibrium", as defined in this arXiv preprint. A corresponding correction has been uploaded to arXiv and submitted to the journal, where it is currently pending review and publication.

Game Theory for Security and Risk Management: From Theory to Practice

Comments, Follow-up work, Updates and Corrections

  • Lemma 2.1 and Theorem 2.2 both need the additional hypothesis that \(f_1(a)≠f_2(a)\) at \(a=max(Ω)\), where \(Ω\) is the union of both supports (thanks to Vincent Bürgin for pointing this out by providing a corresponding example). Since this condition can be satisfied by truncating the distributions at any smaller value \(a−ε\) for some \(ε>0\), Theorem 2.1 only asserts the invariance w.r.t. the ultrafilter on a dense subset of the whole space of probability distributions. See also the updated preprint on arXiv for more details.
  • page 78, Table 4.1: Typo "semetric" should be "symmetric"
  • page 96: "Neither is bounded rationality not equal to" should be "Neither is bounded rationality equal to"
  • page 101: "Based on expected utilities (for p being 1 million" shoul be "p_1" in this bit of the sentence
  • page 101: "certainty of becoming rich, as opposed to picking q_2 for the "; should be "q_1" in this bit of the sentence
  • page 103: "utility for the latter should be less than" => "utility for the latter should be larger than"
  • page 104: for q_B, it should be "7% (red)" (instead of "6% (red)" in the second line)
  • page 132: "who does not have incomplete information" => "who does not have complete information"

Decision Making When Consequences Are Random

Comments, Follow-up work, Updates and Corrections

Lemma 2.1 and Theorem 2.2 both need the additional hypothesis that \(f_1(a)≠f_2(a)\) at \(a=max(Ω)\), where \(Ω\) is the union of both supports (thanks to Vincent Bürgin for pointing this out by providing a corresponding example). Since this condition can be satisfied by truncating the distributions at any smaller value \(a−ε\) for some \(ε>0\), Theorem 2.1 only asserts the invariance w.r.t. the ultrafilter on a dense subset of the whole space of probability distributions. See also the updated preprint on arXiv for more details.
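
For concreteness, one standard way to write such a truncation at \(a-ε\) (a sketch; \(F_k\) denotes the cumulative distribution function of \(f_k\), and this notation is not taken from the paper) is
\[ f_k^{(ε)}(x) = \frac{f_k(x)\cdot\mathbf{1}\{x \le a-ε\}}{F_k(a-ε)}, \qquad k=1,2, \]
after which the hypothesis can be checked at the new maximum \(a-ε\) of the joint support.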

Decisions with Uncertain Consequences-A Total Ordering on Loss-Distributions

Comments, Follow-up work, Updates and Corrections

Lemma 2.1 and Theorem 2.2 both need the additional hypothesis that \(f_1(a)≠f_2(a)\) at \(a=max(Ω)\), where \(Ω\) is the union of both supports (thanks to Vincent Bürgin for pointing this out by providing a corresponding example). Since this condition can be satisfied by truncating the distributions at any smaller value \(a−ε\) for some \(ε>0\), Theorem 2.1 only asserts the invariance w.r.t. the ultrafilter on a dense subset of the whole space of probability distributions. See the correction to the paper uploaded on arXiv, as well as a preprint of the prior work on arXiv, for more details.

Uncertainty in Games: Using Probability-Distributions as Payoffs

Comments, Follow-up work, Updates and Corrections

Lemma 1 and Theorem 1 both need a more restricted set of distributions, requiring for all densities \(f_1, f_2\) that \(f_1(a)≠f_2(a)\) at \(a=max(Ω)\), where \(Ω\) is the union of both supports (thanks to Vincent Bürgin for pointing this out by providing a corresponding example). Since this condition can be satisfied by truncating the distributions at any smaller value \(a−ε\) for some \(ε>0\), Theorem 1 only asserts the invariance w.r.t. the ultrafilter on a dense subset of the whole space of probability distributions. See also the updated preprint on arXiv for more details. Theorem 2 is to be understood as asserting the existence of Nash equilibria inside the hyperreal numbers, i.e., the probabilities are themselves hyperreal and do not necessarily correspond (nor are they convertible) to Nash equilibrium distributions expressed in real numbers.

Computer Aided Teaching of Elliptic Curve Cryptography

Comments, Follow-up work, Updates and Corrections

  • The system has been implemented and is freely available on github, as well as running as a cloud service.
  • The implementation of pairings was anticipated in this paper, but was later done slightly differently in the actual implementation: the paper proposed declaring a dedicated data type, whereas the actual system now offers a built-in function \(TLPairing\) for that purpose. This is a small deviation from what the paper described.

Dynamic Proofs of Retrievability from Chameleon-Hashes

Comments, Follow-up work, Updates and Corrections

The system has been implemented as an experimental prototype and is available on github.

Research Related to Sunset/FFapl

Comments, Follow-up work, Updates and Corrections

The system has been implemented and is freely available on github, as well as running as a cloud service.

Visual Risk Specification and Aggregation

Comments, Follow-up work, Updates and Corrections

In Section II, the user inputs were modeled as Gaussian variables, whose variance should be \(\frac 1 3\left(\frac{i_{\max}-i_{\min}}2\right)\) and \(\frac 1 3\left(\frac{\ell_{\max}-\ell_{\min}}2\right)\), i.e., it should read
\[\begin{aligned} X_I&\sim \mathcal{N}\left(\frac 1 2(i_{\max}+i_{\min}),\; \frac 1 3\Big(\frac 1 2 (i_{\max}-i_{\min})\Big)\right),\\ X_L&\sim \mathcal{N}\left(\frac 1 2(\ell_{\max}+\ell_{\min}),\; \frac 1 3\Big(\frac 1 2 (\ell_{\max}-\ell_{\min})\Big)\right) \end{aligned}\]
on page 2 of the paper. The same correction is also necessary in Section III.A for the opinion pooling method, with the implied changes to the pictures illustrating the method by examples. The error, however, affects only the numbers, not the concept.
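
For illustration, the corrected parameters can be computed with a small helper; the function name is hypothetical, and the second value is simply the quantity \(\frac 1 3\left(\frac{\max-\min}2\right)\) as written above:

    # Hypothetical helper: midpoint and the quantity (1/3) * (max - min) / 2
    # from the corrected formula, for a user-given interval [lo, hi].
    gaussParams <- function(lo, hi) {
      c(center = (lo + hi) / 2, spread = (hi - lo) / 2 / 3)
    }
    gaussParams(1, 5)   # e.g., an impact range from 1 to 5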