Publications
-
Bhattarai, Bimal; Saha, Rupsa; Granmo, Ole-Christoffer; Zadorozhny, Vladimir & Xu, Jiawei
(2023).
A Logic-Based Explainable Framework for Relation Classification of Human Rights Violations.
CEUR Workshop Proceedings.
ISSN 1613-0073.
3464, pp. 14–21.
Abstract:
Using a Relational Tsetlin Machine (RTM) for the analysis of semi-structured data allows the inherent relational structures present in natural language text to be exploited for an explainable classification of the data. A finite Herbrand model derives Horn clauses from the model; these are simple yet powerful logical tools that can build an abstract view of the world. We use this framework to analyze human rights violation data. We show concretely how natural language can be transformed into a relational structure, and further use the Relational Tsetlin Machine not only to classify incidents as serious and non-serious violations, but also to explore the patterns learned by the RTM in order to arrive at those decisions. Furthermore, the distilled Horn clauses show a precise understanding of the concepts involved without the drawback of textual ambiguity.
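As a purely illustrative sketch of the idea behind this paper (not the authors' actual RTM pipeline), the following encodes a simple subject-verb-object sentence as ground relational atoms and checks a Horn-clause-style rule over them. The predicate names and the rule itself are invented for the example.

```python
def to_relations(subject, verb, obj):
    """Encode a simple subject-verb-object sentence as a set of ground atoms."""
    return {("actor", subject), ("action", verb), ("target", obj)}

def horn_rule(atoms):
    """Horn-clause-style rule: serious :- action(detain), target(civilian).
    Returns True when the body of the clause is satisfied by the atoms."""
    return ("action", "detain") in atoms and ("target", "civilian") in atoms

# "The police detained a civilian" -> relational structure -> rule check
atoms = to_relations("police", "detain", "civilian")
print(horn_rule(atoms))  # True: the rule body is satisfied
```

Because the rule is a conjunction of named atoms rather than a weight vector, the classification decision can be read off directly, which is the explainability property the abstract refers to.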
-
Abeyrathna, Kuruge Darshana; Abouzeid, Ahmed Abdulrahem Othman; Bhattarai, Bimal; Giri, Charul; Glimsdal, Sondre & Granmo, Ole-Christoffer
[Show all 11 authors of this article]
(2023).
Building Concise Logical Patterns by Constraining Tsetlin Machine Clause Size.
In Elkind, Edith (Ed.),
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence.
AAAI Press.
ISBN 978-1-956792-03-4.
pp. 3395–3403.
doi:
10.24963/ijcai.2023/378.
Full text in research archive
Abstract:
Tsetlin machines (TMs) are a logic-based machine learning approach with the crucial advantages of being transparent and hardware-friendly. While TMs match or surpass deep learning accuracy for an increasing number of applications, large clause pools tend to produce clauses with many literals (long clauses). As such, they become less interpretable. Further, longer clauses increase the switching activity of the clause logic in hardware, consuming more power. This paper introduces a novel variant of TM learning – Clause Size Constrained TMs (CSC-TMs) – where one can set a soft constraint on the clause size. As soon as a clause includes more literals than the constraint allows, it starts expelling literals. Accordingly, oversized clauses only appear transiently. To evaluate CSC-TM, we conduct classification, clustering, and regression experiments on tabular data, natural language text, images, and board games. Our results show that CSC-TM maintains accuracy with up to 80 times fewer literals. Indeed, the accuracy increases with shorter clauses for TREC, IMDb, and BBC Sports. After the accuracy peaks, it drops gracefully as the clause size approaches a single literal. We finally analyze CSC-TM power consumption and derive new convergence properties.
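The soft clause-size constraint can be pictured with a toy sketch. In the real CSC-TM, literal expulsion happens through Tsetlin automata feedback during learning rather than by direct removal as done here; this only illustrates the invariant that oversized clauses shrink back to the budget and so exist only transiently.

```python
import random

def constrain_clause(clause, max_size, seed=0):
    """Toy soft clause-size constraint: while the clause holds more literals
    than the budget allows, expel literals (here: chosen at random)."""
    rng = random.Random(seed)
    clause = list(clause)
    while len(clause) > max_size:
        clause.remove(rng.choice(clause))  # oversized clauses shrink transiently
    return clause

long_clause = ["x1", "not x2", "x3", "x4", "not x5"]
print(constrain_clause(long_clause, 3))  # 3 surviving literals
```

Shorter clauses mean fewer literals to read when interpreting a decision, and fewer logic gates switching in a hardware implementation, which is the power argument the abstract makes.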
-
Glimsdal, Sondre; Saha, Rupsa; Bhattarai, Bimal; Giri, Charul; Sharma, Jivitesh & Tunheim, Svein Anders
[Show all 7 authors of this article]
(2022).
Focused Negative Sampling for Increased Discriminative Power in Tsetlin Machines.
In Shafik, Rishad (Ed.),
2022 International Symposium on the Tsetlin Machine (ISTM 2022).
IEEE conference proceedings.
ISBN 978-1-6654-7116-9.
pp. 73–80.
doi:
10.1109/ISTM54910.2022.00021.
Abstract:
Tsetlin Machines learn from input data by creating patterns in propositional logic, using the literals available in the data. These patterns vote for the classes in a classification task. Despite their simplistic premise, Tsetlin machines (TMs) have been performing on par with other popular machine learning methods across various benchmarks. Beyond accuracy, TMs also perform well in terms of energy efficiency and learning speed. The general TM scheme works best when there is sufficient discriminatory information available between two classes. In this paper, we explore the use of focused negative sampling (FNS) to discriminate between classes which are not easily distinguishable from each other. We carry out experiments across diverse classification tasks ranging over natural language processing, image processing, and reinforcement learning to show that this approach forces the TM to arrive at patterns that can successfully tell apart two classes that are correlated. Further, we show that the proposed method achieves accuracy comparable to a vanilla Tsetlin Machine approach, but in approximately 42% fewer epochs on average.
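A minimal sketch of what focused negative sampling could look like, assuming negatives are drawn in proportion to how often each class has been confused with the target; the weighting scheme and the function signature are invented for illustration and are not taken from the paper.

```python
import random

def focused_negative_sample(confusions, target, rng=None):
    """Draw a negative class, weighting each candidate by how often it has
    been confused with `target` (confusions[c] = confusion count with target).
    The +1 keeps every class reachable even with a zero confusion count."""
    rng = rng or random.Random(1)
    classes = [c for c in confusions if c != target]
    weights = [confusions[c] + 1 for c in classes]
    return rng.choices(classes, weights=weights, k=1)[0]

# "politics" is confused with "sports" far more often than "business" is,
# so it is sampled as the hard negative most of the time.
conf = {"sports": 0, "politics": 9, "business": 1}
print(focused_negative_sample(conf, "sports"))
```

Concentrating negative feedback on the hardest-to-separate class is what would let the TM converge on discriminative patterns in fewer epochs, matching the intuition in the abstract.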
-
Saha, Rupsa & Jyhne, Sander
(2022).
Interpretable Text Classification in Legal Contract Documents using Tsetlin Machines.
In Shafik, Rishad (Ed.),
2022 International Symposium on the Tsetlin Machine (ISTM 2022).
IEEE conference proceedings.
ISBN 978-1-6654-7116-9.
pp. 7–12.
doi:
10.1109/ISTM54910.2022.00011.
Abstract:
Legal text poses various challenges for automated processing, compounded by the lack of detailed resources available for it. However, the ability to process such texts automatically is highly sought after. In this paper we parse a set of contract documents and identify key legal terminologies present in them, with the help of four text processing methods from different backgrounds: Tsetlin Machines, BERT, CNN-BiLSTM and FastText. We show that the TM-based approach works on par with other popular methods, with the added benefit of making available important clause literals that can act as specific linguistic cues to legal terminology.
-
Abeyrathna, Kuruge Darshana; Bhattarai, Bimal; Goodwin, Morten; Gorji, Saeed Rahimi; Granmo, Ole-Christoffer & Jiao, Lei
[Show all 8 authors of this article]
(2021).
Massively Parallel and Asynchronous Tsetlin Machine Architecture Supporting Almost Constant-Time Scaling.
Proceedings of Machine Learning Research (PMLR).
ISSN 2640-3498.
Full text in research archive
Abstract:
Using logical clauses to represent patterns, Tsetlin Machines (TMs) have recently obtained competitive performance in terms of accuracy, memory footprint, energy, and learning speed on several benchmarks. Each TM clause votes for or against a particular class, with classification resolved using a majority vote. While the evaluation of clauses is fast, being based on binary operators, the voting makes it necessary to synchronize the clause evaluation, impeding parallelization. In this paper, we propose a novel scheme for desynchronizing the evaluation of clauses, eliminating the voting bottleneck. In brief, every clause runs in its own thread for massive native parallelism. For each training example, we keep track of the class votes obtained from the clauses in local voting tallies. The local voting tallies allow us to detach the processing of each clause from the rest of the clauses, supporting decentralized learning. This means that the TM will, most of the time, operate on outdated voting tallies. We evaluated the proposed parallelization across diverse learning tasks, and it turns out that our decentralized TM learning algorithm copes well with working on outdated data, resulting in no significant loss in learning accuracy. Furthermore, we show that the approach provides up to 50 times faster learning. Finally, learning time is almost constant for reasonable clause amounts (employing from 20 to 7,000 clauses on a Tesla V100 GPU). For sufficiently large clause numbers, computation time increases approximately proportionally. Our parallel and asynchronous architecture thus allows processing of more massive datasets and operating with more clauses for higher accuracy.
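The local-voting-tally idea can be sketched as a toy. Here every "clause" runs in its own thread and writes only to its own tally slot, so no lock is shared across clauses; real clause evaluation against input literals is omitted, and each clause simply casts a fixed vote. This is an illustration of the decentralization pattern, not the paper's CUDA implementation.

```python
import threading

def run_clause(vote, tallies, index, steps):
    """Each clause updates its own local tally; no synchronization with
    the other clauses is needed because slots are disjoint."""
    for _ in range(steps):
        tallies[index] += vote

votes = [+1, -1, +1, +1]            # one fixed vote per toy clause
tallies = [0] * len(votes)          # one local tally slot per clause
threads = [threading.Thread(target=run_clause, args=(v, tallies, i, 10))
           for i, v in enumerate(votes)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The class score is aggregated from the (possibly stale) local tallies.
print(sum(tallies))  # 20
```

Because aggregation happens after the fact rather than inside each clause update, a reader of the tallies may see slightly outdated values, which is exactly the trade-off the abstract says the learning algorithm tolerates.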
-
Saha, Rupsa; Granmo, Ole-Christoffer & Goodwin, Morten
(2021).
Using Tsetlin Machine to discover interpretable rules in natural language processing applications.
Expert Systems.
ISSN 0266-4720.
doi:
10.1111/exsy.12873.
-
Saha, Rupsa; Granmo, Ole-Christoffer & Goodwin, Morten
(2020).
Mining Interpretable Rules for Sentiment and Semantic Relation Analysis Using Tsetlin Machines.
In SGAI 2020: Artificial Intelligence XXXVII.
Springer.
ISBN 978-3-030-63798-9.
doi:
10.1007/978-3-030-63799-6_5.
-
Holen, Martin; Saha, Rupsa; Goodwin, Morten; Omlin, Christian Walter Peter & Sandsmark, Knut Eivind
(2020).
Road Detection for Reinforcement Learning Based Autonomous Car.
In ICISS 2020: Proceedings of the 2020 3rd International Conference on Information Science and System.
ACM Publications.
ISBN 978-1-4503-7725-6.
pp. 67–71.
doi:
10.1145/3388176.3388199.
Abstract:
Human mistakes in traffic often have terrible consequences. The long-awaited introduction of self-driving vehicles may solve many of the problems with traffic, but much research is still needed before cars are fully autonomous.
In this paper, we propose a new Road Detection algorithm using online supervised learning based on a Neural Network architecture. This algorithm is designed to support a Reinforcement Learning algorithm (for example, the standard Proximal Policy Optimization or PPO) by detecting when the car is in an adverse condition. Specifically, the PPO gets a penalty whenever the virtual automobile gets stuck or drives off the road with any of its four wheels.
Initial experiments show significantly improved results for PPO when using our Road Detection algorithm, as compared to not using any form of Road Detection.
In fact, without this detection algorithm, the vehicle often gets into non-terminating loops (for example, driving into the dividers, getting stuck, or driving into a pit).
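A hedged sketch of the penalty scheme described above, assuming a simple reward-shaping wrapper around the environment reward; the penalty magnitude and the function signature are invented for illustration and are not taken from the paper.

```python
def shaped_reward(base_reward, wheels_on_road, stuck, penalty=-10.0):
    """Add a penalty to the RL reward (e.g. for PPO) whenever the road
    detector reports the car stuck or any wheel off the road."""
    if stuck or wheels_on_road < 4:
        return base_reward + penalty
    return base_reward

print(shaped_reward(1.0, wheels_on_road=4, stuck=False))  # 1.0  (all fine)
print(shaped_reward(1.0, wheels_on_road=3, stuck=False))  # -9.0 (wheel off road)
```

The penalty makes adverse states (dividers, pits, getting stuck) strictly worse than staying on the road, which is what discourages the non-terminating loops mentioned above.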
Published 16 Apr. 2024 10:48 - Last modified 16 Apr. 2024 10:48