
Towards NLP🇺🇦

Channel address: @towards_nlp
Category: Technology
Language: Indonesian
Subscribers: 1.36K
Channel description

All ngrams about Natural Language Processing that are of interest to @iamdddaryna

Ratings & Reviews

2.50 average from 2 reviews (one 3-star, one 2-star)

Latest messages

2023-05-25 12:42:12 A PhD Student’s Perspective on Research in NLP in the Era of Very Large Language Models

As our IFAN project was recommended as one of the promising research directions, I will recommend in return reading this recent paper, which answers the question: "So what now in NLP research if ChatGPT is out?"
Spoiler: the world has not ended and we still have plenty of work to do!

https://arxiv.org/abs/2305.12544

Based on my research work and what I still want to explore, here is my top list of research directions:

1. Misinformation fight. There are still zero working automated fake news and propaganda detection systems online, while the risk of misinformation spreading keeps increasing.
2. Multilingualism. The usual reminder that there are more languages than English. At least 7k more.
3. Explainability and Interpretability. Do we trust models' decisions? We are still very far from 100%. We can help integrate these models into decision-making processes only if their behavior is transparent. And now think about whether we can even explain every NLP task: the methods are completely different.
4. Fewer resources. Less memory to store models and fine-tune them. Also less data to learn from! Do we really need all these training samples, or do we just need diverse enough data? (See the sketch after this list.)
5. Human-NLP model interaction. What we can admit is that ChatGPT was the first NLP model used not only by specialists but by everyone, because it is more or less pleasant and safe to use. Even if the model cannot answer some input, it still provides a nicely written answer. The wrapper is also extremely important. How should we package these models so that users are comfortable working with them? And what about children, if we want to adapt the models for education even from early ages?
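
On point 4, a minimal sketch of the "diverse enough data" idea: instead of keeping every training sample, cluster the corpus and keep one representative per cluster. TF-IDF features and k-means here are my illustrative choices, not something from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def diverse_subset(texts: list[str], k: int) -> list[str]:
    """Keep k samples that cover the corpus: cluster TF-IDF vectors
    and pick the sample closest to each cluster centroid."""
    X = TfidfVectorizer().fit_transform(texts)
    km = KMeans(n_clusters=k, n_init="auto", random_state=0).fit(X)
    picked = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[c], axis=1)
        picked.append(texts[idx[np.argmin(dists)]])
    return picked

print(diverse_subset(["cats purr", "cats meow", "dogs bark", "stocks fell"], k=2))
```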

Be brave, be creative, be inspired
2023-05-24 13:40:44 On the Impossible Safety of Large AI Models

The LLM success hype has reached not only NLP-related fields, but also the lives of ordinary professionals from many different fields. However, even I personally have not seen any use case where a model performs at 100%, or 99.999%, or 99.9%... accuracy.

A theoretical proof that it is impossible to build an arbitrarily accurate AI model:
https://arxiv.org/abs/2209.15259

Why? TL;DR:

* User-generated data: user-generated data are both mostly unverified and potentially highly sensitive;
* High-dimensional memorization: want to achieve a better score on more data? You need way more parameters. However, the contexts are limitless. So... do we need an infinite number of parameters? The complexity of "fully satisfactory" language processing might be orders of magnitude larger than today's LLMs, in which case we may still obtain greater accuracy with larger models.
* Highly heterogeneous users: the distribution of texts generated by one user greatly diverges from the distribution of texts generated by another. More data means more users and, again, more contexts, which can be difficult to fully grasp and generalize.
* Sparse heavy-tailed data per user: even if we take only one user into account, their data is not dense enough to generalize from. We should expect especially large empirical heterogeneity in language data, as the samples we obtain from a user can completely stand out from that user's overall language distribution.

As a result, LAIM training is unlikely to be easier than mean estimation. The usual ML objective is to estimate a distribution, classically assumed to be a normal one whose mean we want to estimate. How many combinations of such distributions are we able to predict?
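
To get a feel for why heavy tails make even plain mean estimation hard, here is a small simulation (my own illustration, not from the paper). With the same number of samples, the empirical mean of a heavy-tailed Pareto distribution misses its true mean far more often than a Gaussian one:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 1_000, 2_000

# Light tails: the empirical mean concentrates quickly around 0.
gauss_err = [abs(rng.normal(0, 1, n).mean()) for _ in range(trials)]

# Heavy tails: Pareto with alpha=1.5 has a finite mean but infinite variance,
# so rare huge samples keep dragging the empirical mean around.
alpha = 1.5
true_mean = alpha / (alpha - 1)  # mean of a Pareto(alpha) variable with x_min=1
pareto_err = [abs((rng.pareto(alpha, n) + 1).mean() - true_mean)
              for _ in range(trials)]

print(f"Gaussian mean error, 95th percentile: {np.quantile(gauss_err, 0.95):.3f}")
print(f"Pareto   mean error, 95th percentile: {np.quantile(pareto_err, 0.95):.3f}")
```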

+ We need to find a balance between accuracy and privacy.

Pretty challenging task. Will we be able to solve it anyway?
2023-05-10 12:19:22 Language models can explain neurons in language models

What about using GPT-4 to automatically write explanations for the behavior of neurons in large language models, and to score those explanations?

* Explain: Generate an explanation of the neuron’s behavior by showing the explainer model (token, activation) pairs from the neuron’s responses to text excerpts.
* Simulate: Use the simulator model to simulate the neuron's activations based on the explanation.
* Score: Automatically score the explanation based on how well the simulated activations match the real activations.
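
The scoring step can be as simple as correlating the two activation sequences; a minimal sketch (plain correlation here is a stand-in for the paper's own scoring metric):

```python
import numpy as np

def explanation_score(real: np.ndarray, simulated: np.ndarray) -> float:
    """How well do simulated activations track the real ones?
    1.0 means the explanation perfectly predicts when the neuron fires."""
    if real.std() == 0 or simulated.std() == 0:
        return 0.0
    return float(np.corrcoef(real, simulated)[0, 1])

real = np.array([0.0, 0.1, 2.3, 0.0, 1.8])       # neuron activations per token
simulated = np.array([0.0, 0.0, 2.0, 0.1, 1.5])  # activations predicted from the explanation
print(f"score = {explanation_score(real, simulated):.2f}")
```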

Blog from Closed OpenAI: [link]
Paper: [link]
Code and collected dataset of explanations: [link]
2023-05-08 12:45:44 LLMs are everywhere: what other thoughts can we come up with?

This post is a list of alternative sources to read about LLMs and the changes they have brought:

* Choose Your Weapon: Survival Strategies for Depressed AI Academics: "what should we do now that ChatGPT is here?" is a question probably every student/researcher in NLP academia has asked. This statement paper can give you several ideas for how to continue.

* Closed AI Models Make Bad Baselines: we will see how many papers mentioning ChatGPT appear at this ACL. However, closed models are not the way to do benchmarking in research.

* Towards Climate Awareness in NLP Research: together with the rise of dataset and model sizes, our responsibility to the environment also increases. When doing modern research, it is good practice to report how much computational time/resources/CO2 emissions were used.

* Step by Step Towards Sustainable AI: if you want to round off your reading about responsible AI, I really recommend this issue of AlgorithmWatch. Professionals from HuggingFace and several German institutions share their thoughts on the key points we should pay attention to in order to deploy AI safely for humanity and nature.
2023-04-26 12:26:20 Why is text detoxification especially important now?

No chatbot is safe from being toxic at some point (even ChatGPT!). So, if you want to have safe conversations with your users, it is still important to handle toxic language.

With our text detoxification technology, you can:

* Before training your language model or chatbot, you can preprocess scraped training data to ensure there is no toxicity in it. But you should not just throw those samples away: you can detoxify them! Then the major part of the dataset is not lost, and its content is preserved.
* You can ensure that the user's message is also non-toxic. Again, the message is preserved: after detoxification, we ensure the conversation does not shift into an unsafe tone.
* You can safeguard the answers from your chatbot as well! The conversation does not have to stop even if your chatbot generates something toxic: its reply is detoxified, and the user sees a neutral answer. (A minimal sketch follows this list.)
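
A minimal sketch of that last point: wrap every chatbot reply in a detox step. Both checkpoint names below are placeholders, substitute the actual toxicity classifier and detoxification model from the repo:

```python
from transformers import pipeline

# Placeholder checkpoints: swap in the real models from the repo.
is_toxic = pipeline("text-classification", model="your-org/toxicity-classifier")
detox = pipeline("text2text-generation", model="your-org/detox-seq2seq")

def safe_reply(reply: str) -> str:
    """Detoxify a toxic chatbot reply instead of dropping the whole turn."""
    if is_toxic(reply)[0]["label"] == "toxic":
        return detox(reply, max_new_tokens=64)[0]["generated_text"]
    return reply
```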

Check out all the info about our research and all models in this repo!
2023-04-12 12:40:05 IFAN: An Explainability-Focused Interaction Framework for Humans and NLP Models

We have talked before about different techniques for explaining ML and NLP models. OK, we have explained some model's output for a specific input and highlighted some tokens there. What should happen next?

You can use humans to debug and improve your model! Your steps can be:
1. You identify misclassified samples (for instance, in hate speech detection, you have noticed that the model is biased against some target words).
2. You explain the model's decisions and see that the model puts too much or too little weight/attention on some words.
3. You edit the explanation, i.e. the corresponding weights of the word spans that should contribute to the correct label.
4. You do this for several samples and retrain the adapter layer of your model on the new samples (see the sketch after this list).
5. Now your model's behavior is fixed, i.e. it is debiased!
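
A minimal sketch of steps 4-5: freeze the backbone and train only a small adapter on the human-corrected samples. The backbone below is a stand-in module; in practice it would be your pretrained NLP model:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small residual bottleneck plus classification head,
    the only part updated on human-corrected samples."""
    def __init__(self, hidden: int, bottleneck: int = 64, n_classes: int = 2):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        h = h + self.up(torch.relu(self.down(h)))  # residual adapter
        return self.head(h)

# Stand-in for a frozen pretrained encoder.
backbone = nn.Sequential(nn.Linear(32, 128), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False  # only the adapter learns

adapter = Adapter(hidden=128)
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)

# Hypothetical batch of human-corrected samples (features, fixed labels).
x, y = torch.randn(16, 32), torch.randint(0, 2, (16,))
loss = nn.functional.cross_entropy(adapter(backbone(x)), y)
loss.backward()
opt.step()
```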

All this can be done with our platform:
https://ifan.ml/

This is the first solid version; we are still developing many, many new features for it (for instance, a report page where you can monitor the model's performance change). But already now, we believe that the platform can be a solid step towards human-in-the-loop debugging of NLP models.

The corresponding paper about this first version [link]
2023-04-08 09:58:48 MLSS 2023

1 day left until applications close for the Machine Learning Summer School on Applications in Science!
https://mlss2023.mlinpl.org/

I personally took part in MLSS 2020; even though it was virtual, I got so many insights. This year it is in Krakow! Get a chance to listen to lectures from world-famous speakers
2023-04-03 14:40:18
A Survey of Large Language Models

* General overview;
* Listing by the number of parameters;
* Commonly used corpora for training;
* How pre-training can be done;
* Typical architecture types;
* How to fine-tune;
* How to prompt;
* Tasks that can be solved;
* Evaluation setups;

A very comprehensive survey:
https://arxiv.org/abs/2303.18223