Publications API was updated by GitHub Actions
github-actions committed Sep 24, 2024
1 parent 75f5a4f commit 560585c
Showing 28 changed files with 204 additions and 204 deletions.
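
Every hunk in this commit is the same one-line repair: abstracts, titles, and author names whose non-ASCII characters had been double-encoded — UTF-8 bytes read back as Latin-1, so that an en dash (\u2013) was stored as the three characters \u00e2\u0080\u0093, and \u00f6 as \u00c3\u00b6 — are replaced with the correctly decoded text. Below is a minimal sketch of that repair in Python; this is an assumed reconstruction, not the actual workflow script behind the commit.

def fix_mojibake(s: str) -> str:
    """Undo UTF-8 text that was mistakenly decoded as Latin-1.

    \u00e2\u0080\u0093 (bytes 0xE2 0x80 0x93 read as Latin-1) -> \u2013 (en dash)
    \u00c3\u00b6       (bytes 0xC3 0xB6 read as Latin-1)      -> \u00f6 (o-umlaut)
    """
    try:
        # Map each character back to the byte it came from, then decode properly.
        return s.encode("latin-1").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        # Code points above U+00FF, or byte runs that are not valid UTF-8,
        # were never this kind of mojibake: leave the string untouched.
        return s

assert fix_mojibake("Diffie\u00e2\u0080\u0093Hellman") == "Diffie\u2013Hellman"
assert fix_mojibake("Bostr\u00c3\u00b6m") == "Bostr\u00f6m"

The same round trip covers every substitution in the hunks below (\u2014, \u2019, \u201c \u201d, \u00d7, \u2009, \u2212, \u2060); for messier real-world cases, a library such as ftfy automates the detection.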
2 changes: 1 addition & 1 deletion publications/v1/10.1007/s10623-022-01028-0/index.json
@@ -2,7 +2,7 @@
"title": "Standard model leakage-resilient authenticated key exchange using inner-product extractors",
"venue": "Designs, Codes and Cryptography",
"year": "2022",
"abstract": "With the development of side-channel attacks, a necessity arises to invent authenticated key exchange protocols in a leakage-resilient manner. Constructing authenticated key exchange protocols using existing cryptographic schemes is an effective method, as such construction can be instantiated with any appropriate scheme in a way that the formal security argument remains valid. In parallel, constructing authenticated key exchange protocols that are proven to be secure in the standard model is more preferred as they rely on real-world assumptions. In this paper, we present a Diffie\u00e2\u0080\u0093Hellman-style construction of a leakage-resilient authenticated key exchange protocol, that can be instantiated with any CCLA2-secure public-key encryption scheme and a function from the pseudo-random function family. Our protocol is proven to be secure in the standard model assuming the hardness of the decisional Diffie\u00e2\u0080\u0093Hellman problem. Furthermore, it is resilient to continuous partial leakage of long-term secret keys, that happens even after the session key is established, while satisfying the security features defined by the eCK security model.",
"abstract": "With the development of side-channel attacks, a necessity arises to invent authenticated key exchange protocols in a leakage-resilient manner. Constructing authenticated key exchange protocols using existing cryptographic schemes is an effective method, as such construction can be instantiated with any appropriate scheme in a way that the formal security argument remains valid. In parallel, constructing authenticated key exchange protocols that are proven to be secure in the standard model is more preferred as they rely on real-world assumptions. In this paper, we present a Diffie\u2013Hellman-style construction of a leakage-resilient authenticated key exchange protocol, that can be instantiated with any CCLA2-secure public-key encryption scheme and a function from the pseudo-random function family. Our protocol is proven to be secure in the standard model assuming the hardness of the decisional Diffie\u2013Hellman problem. Furthermore, it is resilient to continuous partial leakage of long-term secret keys, that happens even after the session key is established, while satisfying the security features defined by the eCK security model.",
"authors": [
"Janaka Alawatugoda",
"Tatsuaki Okamoto"
6 changes: 3 additions & 3 deletions publications/v1/10.1016/j.eswa.2021.116078/index.json
@@ -2,12 +2,12 @@
"title": "Random subspace and random projection nearest neighbor ensembles for high dimensional data",
"venue": "Elsevier Expert systems with applications",
"year": "2022",
"abstract": "The random subspace and the random projection methods are investigated and compared as techniques for forming ensembles of nearest neighbor classifiers in high dimensional feature spaces. The two methods have been empirically evaluated on three types of high-dimensional datasets: microarrays, chemoinformatics, and images. Experimental results on 34 datasets show that both the random subspace and the random projection method lead to improvements in predictive performance compared to using the standard nearest neighbor classifier, while the best method to use depends on the type of data considered; for the microarray and chemoinformatics datasets, random projection outperforms the random subspace method, while the opposite holds for the image datasets. An analysis using data complexity measures, such as attribute to instance ratio and Fisher\u00e2\u0080\u0099s discriminant ratio, provide some more detailed indications on what relative performance can be expected for specific datasets. The results also indicate that the resulting ensembles may be competitive with state-of-the-art ensemble classifiers; the nearest neighbor ensembles using random projection perform on par with random forests for the microarray and chemoinformatics datasets.",
"abstract": "The random subspace and the random projection methods are investigated and compared as techniques for forming ensembles of nearest neighbor classifiers in high dimensional feature spaces. The two methods have been empirically evaluated on three types of high-dimensional datasets: microarrays, chemoinformatics, and images. Experimental results on 34 datasets show that both the random subspace and the random projection method lead to improvements in predictive performance compared to using the standard nearest neighbor classifier, while the best method to use depends on the type of data considered; for the microarray and chemoinformatics datasets, random projection outperforms the random subspace method, while the opposite holds for the image datasets. An analysis using data complexity measures, such as attribute to instance ratio and Fisher\u2019s discriminant ratio, provide some more detailed indications on what relative performance can be expected for specific datasets. The results also indicate that the resulting ensembles may be competitive with state-of-the-art ensemble classifiers; the nearest neighbor ensembles using random projection perform on par with random forests for the microarray and chemoinformatics datasets.",
"authors": [
"Sampath Deegalla",
"Keerthi Walgama",
"Panagiotis Papapetrou",
"Henrik Bostr\u00c3\u00b6m"
"Henrik Bostr\u00f6m"
],
"author_info": [
{
@@ -38,7 +38,7 @@
"profile_url": "#"
},
{
"name": "Henrik Bostr\u00c3\u00b6m",
"name": "Henrik Bostr\u00f6m",
"profile": "#",
"type": "OUTSIDER",
"id": "",
2 changes: 1 addition & 1 deletion publications/v1/10.1038/s42003-020-01270-z/index.json
@@ -2,7 +2,7 @@
"title": "Genopo: a nanopore sequencing analysis toolkit for portable Android devices",
"venue": "Communications Biology",
"year": "2020",
"abstract": "The advent of portable nanopore sequencing devices has enabled DNA and RNA sequencing to be performed in the field or the clinic. However, advances in in situ genomics require parallel development of portable, offline solutions for the computational analysis of sequencing data. Here we introduce Genopo, a mobile toolkit for nanopore sequencing analysis. Genopo compacts popular bioinformatics tools to an Android application, enabling fully portable computation. To demonstrate its utility for in situ genome analysis, we use Genopo to determine the complete genome sequence of the human coronavirus SARS-CoV-2 in nine patient isolates sequenced on a nanopore device, with Genopo executing this workflow in less than 30\u00e2\u0080\u0089min per sample on a range of popular smartphones. We further show how Genopo can be used to profile DNA methylation in a human genome sample, illustrating a flexible, efficient architecture that is suitable to run many popular bioinformatics tools and accommodate small or large genomes. As the first ever smartphone application for nanopore sequencing analysis, Genopo enables the genomics community to harness this cheap, ubiquitous computational resource.",
"abstract": "The advent of portable nanopore sequencing devices has enabled DNA and RNA sequencing to be performed in the field or the clinic. However, advances in in situ genomics require parallel development of portable, offline solutions for the computational analysis of sequencing data. Here we introduce Genopo, a mobile toolkit for nanopore sequencing analysis. Genopo compacts popular bioinformatics tools to an Android application, enabling fully portable computation. To demonstrate its utility for in situ genome analysis, we use Genopo to determine the complete genome sequence of the human coronavirus SARS-CoV-2 in nine patient isolates sequenced on a nanopore device, with Genopo executing this workflow in less than 30\u2009min per sample on a range of popular smartphones. We further show how Genopo can be used to profile DNA methylation in a human genome sample, illustrating a flexible, efficient architecture that is suitable to run many popular bioinformatics tools and accommodate small or large genomes. As the first ever smartphone application for nanopore sequencing analysis, Genopo enables the genomics community to harness this cheap, ubiquitous computational resource.",
"authors": [
"Hiruna Samarakoon",
"Sanoj Punchihewa",
2 changes: 1 addition & 1 deletion publications/v1/10.1093/comjnl/bxaa055/index.json
@@ -2,7 +2,7 @@
"title": "Public-key encryption in the standard model against strong leakage adversary",
"venue": "The Computer Journal",
"year": "2020",
"abstract": "Over the years, security against adaptively chosen-ciphertext attacks (CCA2) is considered as the strongest security definition for public-key encryption schemes. With the uprise of side-channel attacks, new security definitions are proposed, addressing leakage of secret keys together with the standard CCA2 definition. Among the new security definitions, security against continuous and after-the-fact leakage-resilient CCA2 can be considered as the strongest security definition, which is called as security against (continuous) adaptively chosen-ciphertext leakage attacks (continuous CCLA2). In this paper, we present a construction of a public-key encryption scheme, namely LR-PKE, which satisfies the aforementioned security definition. The security of our public-key encryption scheme is proven in the standard model, under decision BDH assumption. Thus, we emphasize that our public-key encryption scheme LR-PKE is (continuous) CCLA2-secure in the standard model. For our construction of LR-PKE, we have used a strong one-time signature scheme and a leakage-resilient refreshing protocol as underlying building blocks. The leakage bound is 0.15nlogp\u00e2\u0088\u00921 bits per leakage query, for a security parameter k and a statistical security parameter n\u00e2\u0081\u00a0, such that logp\u00e2\u0089\u00a5k and n is a function of k\u00e2\u0081\u00a0. It is possible to see that LR-PKE is efficient enough to be used for real-world usage.",
"abstract": "Over the years, security against adaptively chosen-ciphertext attacks (CCA2) is considered as the strongest security definition for public-key encryption schemes. With the uprise of side-channel attacks, new security definitions are proposed, addressing leakage of secret keys together with the standard CCA2 definition. Among the new security definitions, security against continuous and after-the-fact leakage-resilient CCA2 can be considered as the strongest security definition, which is called as security against (continuous) adaptively chosen-ciphertext leakage attacks (continuous CCLA2). In this paper, we present a construction of a public-key encryption scheme, namely LR-PKE, which satisfies the aforementioned security definition. The security of our public-key encryption scheme is proven in the standard model, under decision BDH assumption. Thus, we emphasize that our public-key encryption scheme LR-PKE is (continuous) CCLA2-secure in the standard model. For our construction of LR-PKE, we have used a strong one-time signature scheme and a leakage-resilient refreshing protocol as underlying building blocks. The leakage bound is 0.15nlogp\u22121 bits per leakage query, for a security parameter k and a statistical security parameter n\u2060, such that logp\u2265k and n is a function of k\u2060. It is possible to see that LR-PKE is efficient enough to be used for real-world usage.",
"authors": [
"Janaka Alawatugoda"
],
2 changes: 1 addition & 1 deletion publications/v1/10.1109/ICIAFS.2014.7069624/index.json
@@ -1,5 +1,5 @@
{
"title": "A structured hardware software architecture for peptide based diagnosis \u00e2\u0080\u0094 Sub-string matching problem with limited tolerance",
"title": "A structured hardware software architecture for peptide based diagnosis \u2014 Sub-string matching problem with limited tolerance",
"venue": "2014 7th International Conference on Information and Automation for Sustainability",
"year": "2014",
"abstract": "The problem of inferring proteins from complex peptide samples in shotgun proteomic workflow sets extreme demands on computational resources in respect of the required very high processing throughputs, rapid processing rates and reliability of results. This is exacerbated by the fact that, in general, a given protein cannot be defined by a fixed sequence of amino acids due to the existence of splice variants and isoforms of that protein. Therefore, the problem of protein inference could be considered as one of identifying sequences of amino acids with some limited tolerance. Two problems arise from this: a) due to these (permitted) variations, the applicability of exact string matching methodologies could be questioned and b) the difficulty of defining a reference (peptide/amino acid) sequence for a particular set of proteins that are functionally indistinguishable, but with some variation in features. This paper presents a model-based hardware acceleration of a structured and practical inference approach that is developed and validated to solve the inference problem in a mass spectrometry experiment of realistic size. Our approach starts from an examination of the known set of splice variants and isoforms of a target protein to identify the Greatest Common Stable Substring (GCSS) of amino acids and the Substrings Subjects to Limited Variation (SSLV) and their respective locations on the GCSS. The hypothesis made here is that these latter substrings (SSLV) appear inside complete peptides and not cutting across peptide boundaries. Then we define and solve the Sub-string Matching Problem with Limited Tolerance (SMPLT) using the Bit-Split Aho Corasick Algorithm with Limited Tolerance (BSACLT) that we define and automate. This approach is validated on identified peptides in a labelled and clustered data set from UNIPROT. A model-based hardware software co-design strategy is used to accelerate the computational workflow of above described protein inference problem. Identification of Baylisascaris Procyonis infection was used as an application instance. This workflow can be generalised to any inexact multiple pattern matching application by replacing the patterns in a clustered and distributed environment which permits a distance between member strings to account for permitted deviations such as substitutions, insertions and deletions. The co-designed workflow achieved up to 70 times maximum speed-up compared to a similar workflow purely run on the processor used for co-design.",
2 changes: 1 addition & 1 deletion publications/v1/10.1109/ICIAFS.2016.7946528/index.json
@@ -2,7 +2,7 @@
"title": "Accelerating k-nn classification algorithm using graphics processing units",
"venue": "2016 IEEE International Conference on Information and Automation for Sustainability (ICIAfS)",
"year": "2016",
"abstract": "k-Nearest Neighbor (k-NN) is a well-known classification algorithm used in many machine learning applications. When the input data size of k-NN algorithm increases, the execution time of the algorithm increases significantly. Therefore, this becomes a bottleneck for practical usage. Since k-NN algorithm is using searching, sorting and other parallelly executable tasks, we have implemented the k-NN algorithm on a GPU using CUDA utilizing the parallel tasks. The results were promising as expected and for 43,500 training records and 14,500 testing records with nine attributes, the GPU execution showed about a 100\u00c3\u0097 faster execution time compared to a CPU version.",
"abstract": "k-Nearest Neighbor (k-NN) is a well-known classification algorithm used in many machine learning applications. When the input data size of k-NN algorithm increases, the execution time of the algorithm increases significantly. Therefore, this becomes a bottleneck for practical usage. Since k-NN algorithm is using searching, sorting and other parallelly executable tasks, we have implemented the k-NN algorithm on a GPU using CUDA utilizing the parallel tasks. The results were promising as expected and for 43,500 training records and 14,500 testing records with nine attributes, the GPU execution showed about a 100\u00d7 faster execution time compared to a CPU version.",
"authors": [
"S Selvaluxmiy",
"TN Kumara",
@@ -1,5 +1,5 @@
{
"title": "Revealing MicroRNA Biomarkers for Alzheimer\u00e2\u0080\u0099s Disease Using Next Generation Sequencing Data",
"title": "Revealing MicroRNA Biomarkers for Alzheimer\u2019s Disease Using Next Generation Sequencing Data",
"venue": "2021 10th International Conference on Information and Automation for Sustainability (ICIAfS)",
"year": "2021",
"abstract": "",
2 changes: 1 addition & 1 deletion publications/v1/10.1109/ICIINFS.2011.6038120/index.json
@@ -2,7 +2,7 @@
"title": "Statechart based modeling and controller implementation of complex reactive systems",
"venue": "2011 6th International Conference on Industrial and Information Systems (ICIIS)",
"year": "2011",
"abstract": "Statechart formalism has been a preferred choice for modeling complex reactive systems (CRS) in recent years. It has inbuilt powerful features of orthogonality, hierarchy, intermodular communication and history. Once statechart based system modeling is done the next issues to be addressed are (1) modular verification of the system for failsafe operation under all possible working conditions (2) progressive controller implementation together with the supervisory control while maintaining traceability and re-configurability and (3) facilitation of controller adaptation for progressive incorporation of security features and supervisory specifications. An elevator system was designed and built to reflect exigencies of a typical CRS hardware/software platform. A controller was designed to meet the above requirements and tested on the platform to validate the feasibility of model-based control design/verification methodology for real scale systems. Modularity was achieved by developing the statechart model of the plant into a tree of communicating language generators. Progresively verified modules were then translated into sequential function charts (SFC) which were finally integrated to form a complete flat SFC. The SFC was then implemented on a PLC platform (Telemechanique). The program was first validated in simulation using Telemechanique \u00e2\u0080\u009cTwidosuite\u00e2\u0080\u009d for different operating conditions and finally tested on the elevator system.",
"abstract": "Statechart formalism has been a preferred choice for modeling complex reactive systems (CRS) in recent years. It has inbuilt powerful features of orthogonality, hierarchy, intermodular communication and history. Once statechart based system modeling is done the next issues to be addressed are (1) modular verification of the system for failsafe operation under all possible working conditions (2) progressive controller implementation together with the supervisory control while maintaining traceability and re-configurability and (3) facilitation of controller adaptation for progressive incorporation of security features and supervisory specifications. An elevator system was designed and built to reflect exigencies of a typical CRS hardware/software platform. A controller was designed to meet the above requirements and tested on the platform to validate the feasibility of model-based control design/verification methodology for real scale systems. Modularity was achieved by developing the statechart model of the plant into a tree of communicating language generators. Progresively verified modules were then translated into sequential function charts (SFC) which were finally integrated to form a complete flat SFC. The SFC was then implemented on a PLC platform (Telemechanique). The program was first validated in simulation using Telemechanique \u201cTwidosuite\u201d for different operating conditions and finally tested on the elevator system.",
"authors": [
"AC Vidanapathirana",
"SD Dewasurendra",
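
Because all 28 files receive this same class of fix, a quick scan for the telltale lead characters can confirm that no double-encoded sequences remain after the commit. A sketch, assuming the publications/v1 layout shown in the file paths above:

import pathlib

# '\u00c3' (0xC3) and '\u00e2' (0xE2) are the Latin-1 readings of the UTF-8
# lead bytes behind every substitution in this commit; match them both as
# raw characters and as literal \uXXXX escapes in the JSON source text.
SUSPECTS = ("\u00c3", "\u00e2", "\\u00c3", "\\u00e2")

for path in pathlib.Path("publications/v1").rglob("index.json"):
    text = path.read_text(encoding="utf-8")
    if any(marker in text for marker in SUSPECTS):
        print("possible mojibake:", path)

A hit is only a candidate — these characters can occur legitimately — so flagged files should be checked by hand before re-running the repair.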