
Clustering statistics. All clustering statistics were computed disregarding the directionality of directed networks. The clustering coefficient c of a node is usually defined as the link density of its neighborhood [32], c = 2t/(k(k − 1)), where t is the number of linked neighbors and k(k − 1)/2 is their maximum possible number; c = 0 for k ≤ 1. The mean ⟨c⟩ is denoted the network clustering coefficient [32], while the clustering mixing r_c is defined as before. The clustering profile gives the mean clustering C_k of nodes with degree k [48]. Note that the denominator in the equation above introduces biases when r < 0 [33]. Thus, we rely on the delta-corrected clustering coefficient b, defined as ck/Δ [49], where Δ is the maximal degree k in the network and b = 0 for k ≤ 1. Similarly, the degree-corrected clustering coefficient d is defined as t/ω [33], where ω is the maximum number of linked neighbors with respect to their degrees k and d = 0 for k ≤ 1. From the definitions it follows that b ≤ c ≤ d.

Diameter statistics. All diameter statistics were computed disregarding the directionality of directed networks. The hop plot shows the percentage of mutually reachable pairs of nodes H(δ) within δ hops [50]. The network diameter is defined as the minimal number of hops δ for which H(δ) = 1, while the effective diameter δ₉₀ is the number of hops at which 90% of such pairs of nodes are reachable [50], H(δ₉₀) = 0.9. Hop plots are averaged over 100 realizations of the approximate neighborhood function with 32 trials [51].

Multidimensional scaling (MDS). MDS is a statistical technique that visualizes the level of similarity of individual objects in a dataset. From the range of available MDS techniques, we used non-metric multidimensional scaling (NMDS), which works as follows. Given are h objects (or points) defined by their coordinates in l dimensions. This situation is expressed as an h × l matrix H. From this original matrix H we compute the h × h dissimilarity matrix D, in which each element D(i, j) represents the Euclidean distance between the pair of objects i and j in the original matrix H. NMDS reduces the dimensionality of the problem by transforming the h × h matrix D into an h × p matrix Y, where h is the number of objects (or points), now embedded in p dimensions instead of l (p < l) [31]. The Euclidean distances between the h points in Y are a monotonic transformation of the corresponding dissimilarities in D. In our analysis, we used an original matrix H of size 6 × 20, meaning that the number of points (databases) is h = 6 and the number of coordinates is l = 20. The original matrix H is transformed into a dissimilarity matrix D of size 6 × 6. Using NMDS, we transformed the matrix D into two matrices Y′ and Y″, where Y′ has size 6 × 2 and Y″ has size 6 × 3.

Externally studentized residuals. Let x_ij be the value of the j-th network measure of the i-th database, where N is the number of databases, N = 6. The corresponding externally studentized residual x̂_ij is

x̂_ij = (x_ij − m̂_ij) / (ŝ_ij √(1 + 1/N)),

where m̂_ij and ŝ_ij are the sample mean and the corrected standard deviation excluding the considered i-th database, m̂_ij = Σ_{k≠i} x_kj / (N − 1) and ŝ²_ij = Σ_{k≠i} (x_kj − m̂_ij)² / (N − 2). Assuming that the errors in x are independent and normally distributed, the residuals x̂ follow a Student t-distribution with N − 2 degrees of freedom.
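To make the three clustering variants concrete, here is a minimal sketch in Python (the paper provides no code; the adjacency-dict representation and the Soffer–Vázquez-style bound used for ω are our assumptions, and b follows the ck/Δ form given above):

```python
from itertools import combinations

def clustering_variants(adj, v):
    """Compute c, delta-corrected b and degree-corrected d for node v.

    adj: dict mapping each node to the set of its neighbors (undirected).
    Illustrative sketch of the definitions in the text, not the paper's code.
    """
    nbrs = adj[v]
    k = len(nbrs)
    if k <= 1:                               # c = b = d = 0 for k <= 1
        return 0.0, 0.0, 0.0
    # t: number of linked pairs of neighbors
    t = sum(1 for u, w in combinations(nbrs, 2) if w in adj[u])
    c = 2 * t / (k * (k - 1))                # link density of the neighborhood
    delta = max(len(adj[u]) for u in adj)    # maximal degree in the network
    b = c * k / delta                        # delta-corrected variant, b <= c
    # omega: maximum number of neighbor pairs that could be linked given the
    # neighbors' own degrees (each neighbor u supports at most
    # min(deg(u) - 1, k - 1) links inside the neighborhood).
    omega = sum(min(len(adj[u]) - 1, k - 1) for u in nbrs) // 2
    d = t / omega if omega else 0.0          # degree-corrected variant, d >= c
    return c, b, d

# Tiny usage example: node 3 has one linked neighbor pair out of three.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(clustering_variants(adj, 3))           # (0.333..., 0.333..., 1.0)
```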
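The hop plot and effective diameter can likewise be illustrated directly. The paper averages 100 realizations of the approximate neighborhood function [51]; for small graphs the exact curve follows from a BFS per node, as in this sketch (which takes δ₉₀ as the smallest integer hop count reaching 90%, without the fractional interpolation some implementations apply):

```python
from collections import deque

def hop_plot(adj):
    """Exact hop plot H(delta): fraction of mutually reachable node pairs
    within delta hops, computed by BFS from every node."""
    counts = {}                              # counts[delta] = pairs at distance delta
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for u, dl in dist.items():
            if u != s:
                counts[dl] = counts.get(dl, 0) + 1
    total = sum(counts.values())
    H, cum = {}, 0
    for delta in sorted(counts):
        cum += counts[delta]
        H[delta] = cum / total               # cumulative fraction of pairs
    return H

def effective_diameter(H, q=0.9):
    """Smallest delta with H(delta) >= q, here 90% of reachable pairs."""
    return min(d for d, frac in sorted(H.items()) if frac >= q)
```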
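A sketch of the NMDS step, assuming scikit-learn is available; the matrix H below is random placeholder data standing in for the real 6 × 20 table of network measures, not the values from the paper:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

h, l = 6, 20                       # 6 databases described by 20 network measures
rng = np.random.default_rng(0)
H = rng.normal(size=(h, l))        # placeholder for the real measure matrix

D = squareform(pdist(H))           # h x h Euclidean dissimilarity matrix

# Non-metric MDS: embed the 6 databases into p = 2 and p = 3 dimensions,
# yielding the matrices called Y' and Y'' in the text.
for p in (2, 3):
    nmds = MDS(n_components=p, metric=False, dissimilarity="precomputed",
               n_init=10, random_state=0)
    Y = nmds.fit_transform(D)
    print(p, Y.shape, nmds.stress_)
```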
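The externally studentized residuals translate directly into NumPy; the leave-one-out mean, the corrected standard deviation, and the √(1 + 1/N) factor all follow the formula above:

```python
import numpy as np

def externally_studentized(x):
    """Externally studentized residuals of a 1-D sample x, e.g. one network
    measure across the N databases, following the formula in the text."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    out = np.empty(N)
    for i in range(N):
        rest = np.delete(x, i)               # exclude the i-th database
        m = rest.mean()                      # leave-one-out mean
        s = rest.std(ddof=1)                 # corrected std over N - 1 values
        out[i] = (x[i] - m) / (s * np.sqrt(1 + 1 / N))
    return out
```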
Significant differences in individual statistics x are revealed by the independent two-tailed Student t-test.
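A residual can then be checked against the Student t-distribution with N − 2 degrees of freedom. A sketch using SciPy; the 0.05 threshold in the usage line is an illustrative choice, not taken from the paper:

```python
from scipy.stats import t

def two_tailed_p(residual, N=6):
    """Two-tailed p-value of an externally studentized residual under the
    Student t-distribution with N - 2 degrees of freedom."""
    return 2 * t.sf(abs(residual), df=N - 2)

# e.g. flag a database whose measure deviates significantly from the others
print(two_tailed_p(3.1) < 0.05)              # True: p ~ 0.036
```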
