Title: RoBERTa on SuperGLUE's 'Choice of Plausible Alternatives' task
Body: COPA is one of the tasks of the [SuperGLUE](https://super.gluebenchmark.com) benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the COPA data and fine-tunes a model on it. We expect scores in the range of their entry on the [SuperGLUE leaderboard](https://super.gluebenchmark.com/leaderboard).
This can be formulated as a multiple choice task, using the [`TransformerMCTransformerToolkit`](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/mc/models/transformer_mc_tt.py#L14) model, analogous to the PIQA model. You can start with the [experiment config](https://github.com/allenai/allennlp-models/blob/main/training_config/tango/piqa.jsonnet) and [dataset reading step](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/mc/tango/piqa.py#L13) from PIQA, and adapt them to your needs. | 0easy
|
Title: LightgbmObjective train() got an unexpected keyword argument 'verbose_eval'
Body: I'm using
`automl = AutoML(mode="Optuna")`
with a regression problem and get the following error:
```
## Error for 1_Optuna_LightGBM
No trials are completed yet.
Traceback (most recent call last):
File "C:\Users\james.pike\AppData\Roaming\Python\Python310\site-packages\supervised\base_automl.py", line 1195, in _fit
trained = self.train_model(params)
File "C:\Users\james.pike\AppData\Roaming\Python\Python310\site-packages\supervised\base_automl.py", line 401, in train_model
mf.train(results_path, model_subpath)
File "C:\Users\james.pike\AppData\Roaming\Python\Python310\site-packages\supervised\model_framework.py", line 223, in train
self.learner_params = optuna_tuner.optimize(
File "C:\Users\james.pike\AppData\Roaming\Python\Python310\site-packages\supervised\tuner\optuna\tuner.py", line 225, in optimize
best = study.best_params
File "C:\Users\james.pike\AppData\Roaming\Python\Python310\site-packages\optuna\study\study.py", line 117, in best_params
return self.best_trial.params
File "C:\Users\james.pike\AppData\Roaming\Python\Python310\site-packages\optuna\study\study.py", line 160, in best_trial
return copy.deepcopy(self._storage.get_best_trial(self._study_id))
File "C:\Users\james.pike\AppData\Roaming\Python\Python310\site-packages\optuna\storages\_in_memory.py", line 234, in get_best_trial
raise ValueError("No trials are completed yet.")
ValueError: No trials are completed yet.
```
On the command line after each attempt it says:
```
Exception in LightgbmObjective train() got an unexpected keyword argument 'verbose_eval'
[W 2023-08-18 15:44:52,803] Trial 0 failed with parameters: {'learning_rate': 0.1, 'num_leaves': 1598, 'lambda_l1': 2.840098794801191e-06, 'lambda_l2': 3.0773599420974e-06, 'feature_fraction': 0.8613105322932351, 'bagging_fraction': 0.970697557159987, 'bagging_freq': 7, 'min_data_in_leaf': 36, 'extra_trees': False} because of the following error: The value None could not be cast to float..
[W 2023-08-18 15:44:52,806] Trial 0 failed with value None.
``` | 0easy
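For context, this error typically comes from LightGBM ≥ 4.0 removing the `verbose_eval` argument from `lgb.train()`; logging is configured via `callbacks=[lgb.log_evaluation(period)]` instead. A minimal sketch of the required translation (the helper name is hypothetical; real code would append an actual `lgb.log_evaluation` callback):

```python
def adapt_train_kwargs(kwargs):
    """Translate the removed `verbose_eval` kwarg into the callback style
    used by LightGBM >= 4.0 (a sketch, not mljar-supervised's actual code)."""
    kwargs = dict(kwargs)  # don't mutate the caller's dict
    period = kwargs.pop("verbose_eval", None)
    callbacks = list(kwargs.get("callbacks", []))
    if period:
        # Real code would do: callbacks.append(lgb.log_evaluation(period=period))
        callbacks.append(("log_evaluation", period))
    kwargs["callbacks"] = callbacks
    return kwargs
```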
|
Title: Good First Issue >> Price Relative Indicator
Body: **Describe the solution you'd like**
Price Relative Indicator
* Takes a _base_ and _other_ Series.
* kwarg: ```unit:bool = False```. When ```True```, the series is normalized so that it starts from _one_.
* kwarg: ```invert:bool = False```. When ```True```, the reciprocal of the result is taken.
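A minimal sketch of the behavior described above, using plain lists for illustration (a real implementation would operate on pandas Series; the function name is hypothetical):

```python
def price_relative(base, other, unit=False, invert=False):
    """Ratio of `base` to `other`, optionally inverted and/or
    normalized so the series starts from one."""
    result = [b / o for b, o in zip(base, other)]
    if invert:
        result = [1.0 / r for r in result]
    if unit:
        first = result[0]
        result = [r / first for r in result]
    return result
```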
**Describe alternatives you've considered**
[Price Relative Indicator](https://www.tradingview.com/script/MYSXTMq2-Price-Relative-Relative-Strength/)
**Which version are you running?** The latest.
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Is your feature request related to a problem? Please describe.**
Nope
**Additional context**
Good first issue. | 0easy
|
Title: Write a BlogPost featuring cleanlab
Body: Blog posts are extremely helpful in letting us explore new applications of cleanlab and introducing the tool to more data scientists. You're of course free to publish your blogpost wherever you like (eg. Medium, Towards Data Science, reddit/learnmachinelearning, Kaggle forums, etc). If interested, feel free to discuss ideas with us and get feedback in our community slack channel: https://cleanlab.ai/slack (you can message Jonas Mueller if you wish to discuss 1:1)
An ideal blogpost will be a Jupyter notebook that highlights an interesting application of cleanlab on real or simulated data.
Some ideas for topics include:
0) use cleanlab to find errors in a popular ML benchmark dataset. See: labelerrors.com for inspiration. There are many famous datasets which nobody has tried running through cleanlab yet.
1) use cleanlab for multi-label classification task such as image or text tagging (where multiple labels can apply to each example). If image tagging, we recommend using with pytorch-vision or timm package. If text tagging, we recommend using with huggingface package. But many other image/text modeling packages are also great to explore!
2) use cleanlab as part of a solution for a Kaggle competition, or MachineHack competition, or data-centric AI competition like [DataPerf](https://dataperf.org/challenges.html). Note many of these competitions have forums/notebook/kernel sections where you can publish your code to help others.
3) investigate which types of models are best to use with cleanlab for finding (simulated) label errors in classification with tabular data (eg. XGBoost vs LightGBM vs CatBoost vs RandomForest vs NeuralNet vs SVM)
4) use cleanlab with NLP model to find label errors in interesting text classification dataset, such as classifying whether pairs of sentences are paraphrases or not, or whether an annotated question-answer pair is correct or not.
5) use cleanlab in less-standard ML task that is not exactly classification but similar, such as: metric learning (or contrastive learning), information retrieval, object detection, classification with data labeled by multiple annotators, etc.
| 0easy
|
Title: Raise test coverage above 90% for giotto/mapper/cover.py
Body: Current test coverage from pytest is 57% | 0easy
|
Title: Add ability to change front end port tcp:3000 to something else
Body: I haven't customized a Windows machine much, but I run into an error when trying to start up the assistant locally. Open-Assistant is trying to use a port that is in use by a system process.
```
OS Name Microsoft Windows 11 Pro
Version 10.0.22621 Build 22621
```
```
PS C:\Users\kylem\Development\Open-Assistant> docker compose --profile ci up --build --attach-dependencies
...
Attaching to open-assistant-backend-1, open-assistant-backend-worker-1, open-assistant-backend-worker-beat-1, open-assistant-db-1, open-assistant-maildev-1, open-assistant-redis-1, open-assistant-web-1, open-assistant-webdb-1
Error response from daemon: driver failed programming external connectivity on endpoint open-assistant-web-1 (ccce9108f9ee2bfe52bc2f6a77028b76e2a08935906fbd345164d1f727215a17): listen tcp4 0.0.0.0:3000: bind: address already in use
```
```
> netstat -ano
TCP 0.0.0.0:3000 0.0.0.0:0 LISTENING 4788
```
the `4788` process listening on port 3000 is https://en.wikipedia.org/wiki/Svchost.exe.
## My quick work around
I searched the repository and changed any instance of 3000 (that was specific to networks) to 3239. The 3239 number was chosen by random human fingers.
```
open-assistant-web-1 | Listening on port 3239 url: http://localhost:3239
```
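One way the repository could support this is with a compose-level default; a sketch assuming a hypothetical `WEB_PORT` environment variable:

```yaml
# docker-compose sketch (WEB_PORT is a hypothetical variable name)
services:
  web:
    ports:
      - "${WEB_PORT:-3000}:3000"   # host port configurable, container port unchanged
```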
It works 👍 . I'm not suggesting this as the solution, but this repository could use an environment variable or some other mechanism to change the number when there is a conflict. | 0easy
|
Title: Add an easy option to log to terminal and a file
Body: The logging tutorial suggests using `>` for redirection, but then we lose terminal logs. We should modify the CLI so users can send the logs to a file, e.g.,
```
# set log level and log to the terminal
ploomber build --log info
```
```
# set log level, adding --log-file logs to the terminal and to a file
ploomber build --log info --log-file my.log
```
relevant file: https://github.com/ploomber/ploomber/blob/09e4b3ad650c66c45814222678afc954b76d1536/src/ploomber/cli/parsers.py#L57 | 0easy
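Under the hood, the dual output could be implemented with two handlers on the root logger; a sketch (not ploomber's actual code):

```python
import logging

def configure_logging(level_name, log_file=None):
    """Log to the terminal, and also to `log_file` when given."""
    handlers = [logging.StreamHandler()]
    if log_file:
        handlers.append(logging.FileHandler(log_file))
    logging.basicConfig(
        level=getattr(logging, level_name.upper()),
        handlers=handlers,
        force=True,  # replace any handlers configured earlier
    )
```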
|
Title: Test works locally but fails in CI when comparing a GIF image
Body: **What's your question?**
I've read through all the docs. When I run my test on my local machine it passes - the snapshot works. When the tests are run in CI, the snapshot fails when comparing the GIF image that is part of the object being compared. I can clearly see the difference in the data of the two images. Anyhow, the following seems relevant, but I'm unsure how to use it (or why local vs CI makes a difference).
https://github.com/syrupy-project/syrupy/blob/main/tests/examples/test_custom_image_extension.py
In the class that I'm trying to compare one of the attributes is `image` and it contains binary GIF data.
Any pointers to what I should be doing different are much appreciated!
Here is the failed CI run: https://github.com/gwww/env_canada/actions/runs/13314532366/job/37185188114
Here is the assert that failed: https://github.com/gwww/env_canada/blob/radar-love/tests/ec_radar_test.py#L83
Happy to provide more breadcrumbs!
Thanks in advance! | 0easy
|
Title: Make EvaluationRunResult work without pandas
Body: Refactor `EvaluationRunResult` to run without depending on the pandas DataFrame.
- Add an option to make the return output a `.csv` instead of dataframes
- Make a LazyImport for `pandas.DataFrame`, which should be triggered if the functions are to output results as data frames
- Remove the dependency on the base class `BaseEvaluationRunResult`
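The lazy-import idea could look roughly like this (a generic sketch; Haystack's own `LazyImport` helper may differ):

```python
import importlib

class LazyImport:
    """Defer importing an optional dependency until it's actually needed."""

    def __init__(self, module_name, message):
        self.module_name = module_name
        self.message = message  # shown if the dependency is missing
        self._module = None

    def resolve(self):
        if self._module is None:
            try:
                self._module = importlib.import_module(self.module_name)
            except ImportError as exc:
                raise ImportError(self.message) from exc
        return self._module
```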
| 0easy
|
Title: Relative strength
Body: Hi, is it possible for you to add Relative Strength? The only thing I found was the Relative Strength Index, not Relative Strength. Here are a few links about Relative Strength.
[Relative Strength]
https://www.investopedia.com/terms/r/relativestrength.asp
[Relative Strength vs Relative Strength Index]
https://advisoranalyst.com/2016/11/01/relative-strength-vs-relative-strength-index-whats-the-difference.html/ | 0easy
|
Title: How should name_format be changed to support create_timestamp?
Body: I'm currently building a dataset and want file names like 20231212_{id}. Using the id makes the name a bit long, so I'd like to switch to create_timestamp. Which parts of the code would I need to change? Thanks!
"name_format": "create_time create_timestamp",
"date_format": "%Y%m%d",
"split": "_",
| 0easy
|
Title: Parsing model: Rename `Return` to `ReturnSetting` and `ReturnStatement` to `Return`
Body: The deprecated `[Return]` setting is currently represented in the parsing model using a `Return` class and the `RETURN` statement is represented as `ReturnStatement`. For consistency with other control structures, it's better to change this so that `Return` represents `RETURN` and `[Return]` is represented by `ReturnSetting`. We already added `ReturnSetting` as a forwards compatible alias for `Return` in RF 6.1 (#4656) exactly for this purpose. The `ModelVisitor` base class also support `visit_ReturnSetting` already now.
In practice this is what needs to be done:
- Rename `Return` to `ReturnSetting` and remove the current `ReturnSetting` alias.
- Rename `ReturnStatement` to `Return`.
- Add `ReturnStatement` alias for backwards compatibility.
- Add support for `visit_ReturnStatement` to the `ModelVisitor`. It will be called with `Return` nodes if `visit_Return` is missing.
This is a backwards incompatible change and tools working with the parsing model need to be updated. When using `ModelVisitor`, the following code ought to work with all versions. If only RF 6.1 or newer needs to be supported, `visit_Return` isn't needed because Robot will automatically call appropriate `visit_ReturnXxx` method.
```python
from robot import __version__ as robot_version
from robot.api.parsing import ModelVisitor
RF7_OR_NEWER = int(robot_version.split('.')[0]) >= 7
class Example(ModelVisitor):

    def visit_Return(self, node):
        if RF7_OR_NEWER:
            self.visit_ReturnStatement(node)
        else:
            self.visit_ReturnSetting(node)

    def visit_ReturnStatement(self, node):
        ...

    def visit_ReturnSetting(self, node):
        ...
``` | 0easy
|
Title: Remote: Enhance `datetime`, `date` and `timedelta` conversion
Body: XML-RPC used by the Remote API supports only [some data types](https://en.wikipedia.org/wiki/XML-RPC#Data_types). Currently the Remote library converts everything it doesn't recognize to strings, but there are several issues:
- Some types, at least `datetime`, are converted to strings even though XML-RPC actually supports them natively.
- Some types are converted to strings even though some other type would be better. For example, `timedelta` should probably be converted to a float by using `timedelta.total_seconds()`. Also `date` could be converted to `datetime` that then would be handled automatically.
- It's possible that with some types we should do some more formatting than just call `str()`.
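The bullet points above could be sketched as follows (an assumption about the eventual rules, not Robot's actual code):

```python
from datetime import date, datetime, time, timedelta

def to_xmlrpc_compatible(value):
    """Convert values the way the issue proposes before XML-RPC transport."""
    if isinstance(value, datetime):
        return value  # natively supported by XML-RPC
    if isinstance(value, date):
        return datetime.combine(value, time.min)  # promote date -> datetime
    if isinstance(value, timedelta):
        return value.total_seconds()  # float seconds
    return str(value)  # current fallback behavior
```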
The main motivation for this issue is making it easier for remote servers to convert values not supported by XML-RPC back to appropriate types. As an example, see robotframework/PythonRemoteServer#84.
In practice we need to go through all types that Robot's argument conversion supports and see what's the best way to handle them. | 0easy
|
Title: Incorrect type hint for WebClient#files_upload file argument
Body: After #771 was merged, the `files_upload` function signature still does not accept a `bytes` type object: https://github.com/slackapi/python-slack-sdk/blob/main/slack_sdk/web/client.py#L1458-L1460
```
def files_upload(
    self, *, file: Union[str, IOBase] = None, content: str = None, **kwargs
) -> SlackResponse:
```
So the OP example produces a mypy error: `Argument "file" to "files_upload" of "WebClient" has incompatible type "bytes"; expected "Union[str, IOBase, None]"`
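The fix is presumably just widening the annotation; a standalone sketch of the corrected signature (the return type is simplified here for illustration):

```python
from io import IOBase
from typing import Optional, Union

def files_upload(*, file: Optional[Union[str, bytes, IOBase]] = None,
                 content: Optional[str] = None, **kwargs) -> dict:
    """Illustration only: the `file` argument now also accepts `bytes`."""
    return {"file": file, "content": content, **kwargs}
```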
_Originally posted by @gburek-fastly in https://github.com/slackapi/python-slack-sdk/issues/770#issuecomment-749221824_ | 0easy
|
Title: [UI] Empty Accelerator should raise an issue
Body: <!-- Describe the bug report / feature request here -->
```
resources:
  accelerators: # Fill in the accelerator type
  cpus: 16+
  memory: 32+
```
| 0easy
|
Title: Rewrite Ray Tune Hyperparameter optimization to use current Tuner method
Body: Ray Tune switched from ```tune.run()``` to ```tune.Tuner.fit()``` a few years ago as part of a reorganization of the library's structure, with the Tuner now being the standard across their documentation for hyperparameter optimization.
The darts library [user guide page on this](https://unit8co.github.io/darts/userguide/hyperparameter_optimization.html) should be updated or added to, to reflect the changes in ray.
I've personally been having trouble getting ray and darts to work how I'd like, but if I get it to work, I'll clean up my code and suggest an edit to the page! If anyone has code that they've written that uses Tuner with darts and would like to share, I'd love to take a look at it. | 0easy
|
Title: Add option to show percentage or number in Class Prediction Error Plot y-axis
Body: The **Class Prediction Error Plot** shows the number of predicted classes on the y-axis. This chart is not very useful when classes are imbalanced, i.e. when one of the classes has very few instances, because its misclassifications are barely visible.
If the y-axis showed the % of predicted classes instead, that would solve this issue. | 0easy
|
Title: Fix field test on MDV
Body: Seems the unit test for testing field data was named test_elevation
https://github.com/ARM-DOE/pyart/blob/master/pyart/io/tests/test_mdv_radar.py#L341
Just need a simple function name change | 0easy
|
Title: Create JSONField which wraps implementation of django.contrib.postgres and jsonfield
Body: **djangoSHOP** currently uses the `JSONField` from https://pypi.python.org/pypi/django-jsonfield which is portable and the only available choice on Django-1.8.
Now that Django ships with its own `JSONField`, we should use that. Unfortunately it is specific to Postgres only, hence we need a mechanism which wraps the `JSONField` and only uses the Postgres specific one, if we know for sure, that the database runs on Postgres, otherwise it uses the other one.
Since all occurrences of `JSONField` use `default=dict`, this default shall be integrated into that wrapping field. This field shall be implemented in `shop.models.fields.py`.
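The dispatch could be as simple as checking the database vendor, sketched here with stub classes standing in for the real Django fields:

```python
class PostgresJSONField:   # stands in for django.contrib.postgres.fields.JSONField
    pass

class PortableJSONField:   # stands in for the django-jsonfield package's field
    pass

def json_field_class(db_vendor):
    """Pick the Postgres-specific field only when we know for sure the
    database is Postgres (sketch of the proposed wrapper)."""
    if db_vendor == "postgresql":
        return PostgresJSONField
    return PortableJSONField
```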
| 0easy
|
Title: [LineZone] - allow per class counting
Body: ### Description
Currently, [sv.LineZone](https://github.com/roboflow/supervision/blob/3024ddca83ad837651e59d040e2a5ac5b2b4f00f/supervision/detection/line_counter.py#L11) provides only aggregated counts - all classes are thrown into one bucket. In the past, many users have asked us to provide a more granular, per-class count. This can be achieved by adding `class_in_count` and `class_out_count` dictionaries that will store per-class counts.
### API
```python
class LineZone:
    def __init__(
        self,
        start: Point,
        end: Point,
        triggering_anchors: Iterable[Position] = (
            Position.TOP_LEFT,
            Position.TOP_RIGHT,
            Position.BOTTOM_LEFT,
            Position.BOTTOM_RIGHT,
        ),
    ):
        # Existing initialization code...
        self.class_in_count: Dict[int, int] = {}
        self.class_out_count: Dict[int, int] = {}

    def trigger(self, detections: Detections) -> Tuple[np.ndarray, np.ndarray]:
        crossed_in = np.full(len(detections), False)
        crossed_out = np.full(len(detections), False)
        # Required logic changes...
```
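The required bookkeeping in `trigger` could look like this, extracted into a plain function for illustration (names are assumptions, not the final API):

```python
def update_class_counts(class_ids, crossed_in, crossed_out,
                        class_in_count, class_out_count):
    """Increment per-class in/out counters for detections that crossed the line."""
    for class_id, c_in, c_out in zip(class_ids, crossed_in, crossed_out):
        if c_in:
            class_in_count[class_id] = class_in_count.get(class_id, 0) + 1
        if c_out:
            class_out_count[class_id] = class_out_count.get(class_id, 0) + 1
```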
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will definitely speed up the review process. Each change must be tested by the reviewer. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | 0easy
|
Title: Check if return_object is working in the RareLabelEncoder
Body: Add this to tests. | 0easy
|
Title: Filter test suite warnings output by wsgiref to non-standard HTTP methods.
Body: | 0easy
|
Title: [Dev] Automatically source the sky environment for dev mode
Body: <!-- Describe the bug report / feature request here -->
It is a little bit tedious to manually `source ~/skypilot-runtime/bin/activate` for dev work on sky serve/managed jobs. Can we automatically activate it in dev mode (e.g. `SKYPILOT_DEV=1`)?
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
| 0easy
|
Title: Minor fix in logger for chart
Body: ### System Info
version v1.5.17
### 🐛 Describe the bug
In the `execute_code` method there is a call to `add_save_chart`, where the temporary chart path is replaced by the provided path.
The issue is that it always shows a message in the log, even if my query has nothing to do with charts.
That's because there is no check whether the path actually needs to be replaced:
```
code = code.replace("temp_chart.png", save_charts_file.as_posix())
logger.log(f"Saving charts to {save_charts_file}")
```
I guess it should be something like this (but I'm not sure here):
```
if "temp_chart.png" in code:
    code = code.replace("temp_chart.png", save_charts_file.as_posix())
    logger.log(f"Saving charts to {save_charts_file}")
```
Now the message appears in the log only when the code involves a chart.
It might also be better to define "temp_chart.png" as a constant somewhere.
Another way to fix it is to change the message to something like "New chart location is xxx", because this code does not actually save the chart, it only replaces the path.
| 0easy
|
Title: Add example of amp training & quantization-aware training
Body: | 0easy
|
Title: 30000 !
Body: :tada: :birthday: :tada: | 0easy
|
Title: AIP-38 | Add Dag Warning banner to Dag Details page
Body: ### Body
Use the `public/dagWarnings` API endpoint to see if there are any warnings for a dag_id and display a banner on that Dag's detail page
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | 0easy
|
Title: Configurable Spacing After ":" in JSON via "Customize JSON Output" addon
Body: ### Describe the problem
My files don't have a space, but Weblate reformats the entire file by adding one.
```diff
-"in_1_mile":"In einer Meile",
-"in_1_5_miles":"In anderthalb Meilen",
-"in_2_miles":"In zwei Meilen",
-"unknown_camera":"Blitzer voraus"
+"in_1_mile": "Za jednu míli",
+"in_1_5_miles": "Za jednu a půl míle",
+"in_2_miles": "Za dvě míle",
+"unknown_camera": "Radar před vámi"
```
### Describe the solution you would like
Could we add an option to the "Customize JSON output" addon to control spacing after the ":"?
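For reference, the two behaviors map directly onto the `separators` argument of Python's `json.dumps` (whether Weblate's JSON serialization exposes this is exactly what the option would control):

```python
import json

data = {"in_1_mile": "In einer Meile"}

# No space after ":" (what the reporter's files use)
compact = json.dumps(data, ensure_ascii=False, separators=(",", ":"))

# Space after ":" (what Weblate currently writes)
spaced = json.dumps(data, ensure_ascii=False, separators=(",", ": "))
```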
### Describe alternatives you have considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_ | 0easy
|
Title: provide support for other input format
Body:
### Description
In the future, igel should support multiple dataset formats other than csv. Maybe add support for excel, json and sql tables.
It would be great if users have the flexibility of providing their datasets in other formats and not only csv.
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
If you want to work on this, please consider using pandas because it's already a dependency in igel. Not adding unnecessary dependencies is important, so please check whether this can be achieved using pandas. Otherwise, we can discuss adding an additional library in the comments.
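A sketch of how the format dispatch could look, with pandas reader names keyed by file extension (the mapping and function name are hypothetical):

```python
import pathlib

# Hypothetical extension -> pandas reader mapping
PANDAS_READERS = {
    ".csv": "read_csv",
    ".xlsx": "read_excel",
    ".json": "read_json",
}

def reader_name_for(dataset_path):
    """Return the pandas reader to use for the given dataset file."""
    suffix = pathlib.Path(dataset_path).suffix.lower()
    try:
        return PANDAS_READERS[suffix]
    except KeyError:
        raise ValueError(f"Unsupported dataset format: {suffix!r}")
```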
| 0easy
|
Title: feature: default call_name for `broker.subscriber` if there is no subscriber function
Body: Currently, **AsyncAPI** raises an error if you try to generate a schema for a subscriber without any handler function. We should add a default name for that case.
|
Title: Refactor all tests to use in-memory notebooks instead of ipynb files
Body: | 0easy
|
Title: written error
Body: https://github.com/ARM-DOE/pyart/blob/dbe4d70eccc1b88260d44720cf133694d127df0a/pyart/graph/radarmapdisplay.py#L264
width = (x.max() - x.min()) * 1000. | 0easy
|
Title: [bug] Error handling on dynaconf get CLI
Body: **Describe the bug**
When using get to read a non existent variable it outputs the python traceback to the CLI
**To Reproduce**
```console
$ dynaconf get DONTEXIST
Traceback (most recent call last):
File "/venv/bin/dynaconf", line 8, in <module>
sys.exit(main())
File "/src/dynaconf/dynaconf/vendor/click/core.py", line 857, in __call__
return self.main(*args, **kwargs)
File "/src/dynaconf/dynaconf/vendor/click/core.py", line 810, in main
rv = self.invoke(ctx)
File "/src/dynaconf/dynaconf/vendor/click/core.py", line 1292, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/src/dynaconf/dynaconf/vendor/click/core.py", line 1099, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/src/dynaconf/dynaconf/vendor/click/core.py", line 613, in invoke
return callback(*args, **kwargs)
File "/src/dynaconf/dynaconf/cli.py", line 476, in get
result = settings[key] # let the keyerror raises
File "/src/dynaconf/dynaconf/utils/functional.py", line 19, in inner
return func(self._wrapped, *args)
File "/src/dynaconf/dynaconf/base.py", line 315, in __getitem__
raise KeyError(f"{item} does not exist")
KeyError: 'DONTEXIST does not exist'
```
**Expected behavior**
```console
$ dynaconf get DONTEXIST
Key not found. # stderr
$ echo $? # proper retcode
1
```
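A sketch of the change in `dynaconf/cli.py` that would produce this behavior (hypothetical helper; the real fix would wrap the existing `settings[key]` lookup):

```python
import sys

def get_key_or_exit(settings, key):
    """Print a friendly message to stderr and exit 1 instead of raising."""
    try:
        return settings[key]
    except KeyError:
        print("Key not found.", file=sys.stderr)
        raise SystemExit(1)
```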
The same behavior as we have in `list`
```console
$ dynaconf list -k DONTEXIST
Django app detected
Working in development environment
Key not found
```
```
| 0easy
|
Title: Feature suggestion: fetch card codes via HTTP API
Body: I'd like the option to fetch card codes via an HTTP API request, similar to 彩虹发卡 (Rainbow card-issuing).
The feature should let you choose GET or POST for the request, with an option to use the response body as the card code.
The request URL should be a template carrying order parameters, e.g.:
http://127.0.0.1:8080?orderId=${oid}&price=${price}&kind=${shopkind} | 0easy
|
Title: Raise test coverage above 90% for gtda/mapper/cluster.py
Body: Current test coverage from pytest is 57% | 0easy
|
Title: Threading/Async for OpenAI requests
Body: As multiple users use the bot, we need to thread or make asynchronous the OpenAI API requests so that the bot's main thread doesn't get blocked every time it works on a request. Currently the main thread gets blocked, and if many people use multiple commands consecutively, the bot blocks for a long time. At the very least (and likely the easier approach), some kind of non-blocking queue for requests should be implemented instead of attempting to respond to the user immediately. This is probably a good approach to start with as well.
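The queue idea could be sketched with `asyncio` like this (the worker offloads the blocking API call to a thread; names are illustrative, not the bot's actual code):

```python
import asyncio

def blocking_api_call(prompt):
    """Stand-in for the blocking OpenAI request."""
    return f"response to {prompt!r}"

async def request_worker(queue):
    """Drain queued requests so the bot's event loop is never blocked."""
    while True:
        prompt, future = await queue.get()
        # Run the blocking call in a thread instead of on the event loop.
        result = await asyncio.to_thread(blocking_api_call, prompt)
        future.set_result(result)
        queue.task_done()
```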
| 0easy
|
Title: Provide an option to pass DB_SSLMODE for postgres
Body: /kind bug
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
We are trying to connect Katib to Azure flexible postgres database, but the connection fails due to disabled ssl mode https://github.com/kubeflow/katib/blob/f4c8861c810cc3088beebe9578932151850eafd1/pkg/db/v1beta1/postgres/postgres.go#L53
**What did you expect to happen:**
There should be a dedicated variable which can be used to pass the required value. In our case, Ssl Mode=Require;
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
Nonprod/prod
- Katib version (check the Katib controller image version): katib-controller:v0.14.0
- Kubernetes version: (`kubectl version`): v1.26.3
- OS (`uname -a`): Ubuntu
---
<!-- Don't delete this message to encourage users to support your issue! -->
Impacted by this bug? Give it a 👍 We prioritize the issues with the most 👍
| 0easy
|
Title: Add a way for "ploomber task" cli to print task metadata
Body: Sometimes I want to take a look at the metadata stored for a given task (like the source code, params, last time it ran), but the CLI doesn't offer an easy way to do this, so I'm thinking:
```sh
ploomber task {task-name} --metadata
```
Should print:
* source code stored in metadata
* params
* timestamp
Relevant files:
https://github.com/ploomber/ploomber/blob/master/src/ploomber/cli/task.py
https://github.com/ploomber/ploomber/blob/master/src/ploomber/products/metadata.py
| 0easy
|
Title: [BUG] Index-based colormodes raise `ZeroDivisionError` for single-trace plots
Body: All index-based colormodes raise a `ZeroDivisionError` when plotting a ridgeline plot with either a single trace or a single trace per row.
The index-based colormodes are:
```python
colormodes = [
"row-index",
"trace-index",
"trace-index-row-wise",
]
```
MVE:
```pycon
>>> from ridgeplot import ridgeplot
>>> ridgeplot(samples=[[1, 2, 2, 3]], colormode=colormodes[0])
Traceback (most recent call last):
...
File "[...]/ridgeplot/_color/interpolation.py", line 68, in _interpolate_row_index
[((ctx.n_rows - 1) - ith_row) / (ctx.n_rows - 1)] * len(row)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~
ZeroDivisionError: division by zero
```
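A guarded version of the failing expression could special-case `n_rows == 1` (a sketch; mapping a single row to `0.0` is an assumption about the desired behavior):

```python
def interpolate_row_index(ith_row, n_rows, row_len):
    """Per-row colorscale position, avoiding division by zero for one row."""
    if n_rows <= 1:
        return [0.0] * row_len  # single row: pin to one end of the colorscale
    return [((n_rows - 1) - ith_row) / (n_rows - 1)] * row_len
```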
| 0easy
|
Title: add action to post binder link on each PR
Body: we want to create a github action that posts a link to binder whenever someone opens a PR, this will allow us to easily test things
some notes:
1. since this requires a github token, I believe the trigger should be pull_request_target
2. the host should be binder.ploomber.io
3. we also need to add an environment.yml, similar to [this one](https://github.com/ploomber/jupysql/blob/master/environment.yml)
4. and a [postBuild](https://github.com/ploomber/jupysql/blob/master/postBuild) like this
more information
https://mybinder.readthedocs.io/en/latest/
https://mybinder.org/
some related code:
https://github.com/jupyterlab/maintainer-tools/blob/main/.github/actions/binder-link/action.yml | 0easy
|
Title: TypeError("cannot convert 'Repr' object to bytes") on failed file snapshot
Body: **Describe the bug**
`snapshot = <[TypeError("cannot convert 'Repr' object to bytes") raised in repr()] SnapshotAssertion object at 0x20ebc456220>`
**To reproduce**
```
@pytest.fixture
def snapshot(snapshot):
    return snapshot.use_extension(SingleFileSnapshotExtension)

def test_issue(snapshot):
    b = b'\x01\x01\x01\x01\x01\x00\x01\x01\x01\x01'
    assert b == snapshot
```
Run the test without creating snapshot file (or create snapshot file, and change the byte string so the assertion fails)
Interestingly, if we don't use the custom fixture and instead write `assert b == snapshot.use_extension(SingleFileSnapshotExtension)`, then we don't get an error.
**Expected behavior**
No exception
**Environment (please complete the following information):**
- OS: [Windows 10]
- Syrupy Version: [3.0.2]
- Python Version: [3.9]
| 0easy
|
Title: Update a component
Body: The components part of our codebase was written some time ago, with older sklearn versions and before Python typing was production-ready.
In general, some of these files need to be cleaned up: mostly typing of parameters and functions, adding documentation about these parameters, and finally double-checking against scikit-learn that there aren't new or deprecated parameters we still use.
To go the extra mile, some standalone tests for these components would be useful. At the moment we have general tests that cover all components, but specific tests for each component alongside them would help with two other long-term issues, namely
* #1350
* #1351
We would appreciate anyone who would like to take a component and help upgrade auto-sklearn to a more modern python :) Please see the [contribution guide](https://github.com/automl/auto-sklearn/blob/master/CONTRIBUTING.md) if you'd like to get started | 0easy
|
Title: Add $RANDOM
Body: Hello! I have an idea!
Under bash, zsh, and probably others, if you type "echo $RANDOM" you get a random number. It is a different random number each time it is echoed.
I thought this should be a thing in Xonsh because this post from It's FOSS on Mastodon treats it as a given that "echo $RANDOM" on the (unnamed) CLI will have this effect, but this is not the case for Xonsh.
https://infosec.exchange/deck/@itsfoss@mastodon.social/111277379265186528
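In bash, `$RANDOM` yields a fresh integer in 0..32767 on each expansion; a sketch of that behavior (not xonsh's actual env-var machinery):

```python
import random

class RandomVar:
    """An environment entry that re-evaluates on every read, like bash's $RANDOM."""

    def __str__(self):
        return str(random.randint(0, 32767))
```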
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Add support for the Parsel JMESPath feature
Body: See https://github.com/scrapy/parsel/pull/181 and https://github.com/Gallaecio/scrapy/commit/d642881e25b6a7ce7d8f640b354459b9a4921790 linked there.
Note that the minimum version of Parsel that Scrapy requires should stay the same, i.e. no need to update the minimum version of Parsel in setup.py. | 0easy
|
Title: Add anchor link for each request in the report to make it easily shareable
Body: It would be nice to have [anchor links](https://visualcomposer.com/blog/what-is-anchor-link/) available at each request in the report.
To share a specific request that needs review would be easier than having to scan (pun intended) through the entire report. | 0easy
|
Title: Update examples.ipynb to include new Visualizers
Body: I noticed that the examples.ipynb was out-of-date.
Here are the things that need to be changed/updated.
- [ ] Include new visualizers
- Class Prediction Error
- CVScores
- Manifold
- Feature Importance
- Recursive Feature Elimination
- Classification Report w/ support
- Discrimination Threshold
- Validation Curve
- Learning Curve
- [x] Remove deprecated Visualizers (e.g ScatterVisualizer)
- [ ] When the visualizers are executed some of them raise warnings. For aesthetics of the notebook, we need to turn off all warnings. See example code below
```
import warnings
warnings.filterwarnings('ignore')
```
- [x] Fix issue #436 as it partially addresses some of these concerns.
@DistrictDataLabs/team-oz-maintainers
| 0easy
|
Title: Tox hangs after build error
Body: ## Issue
Attempting to run tests on the requests-mock project, I ran `tox`. Fairly quickly an error was emitted, but my shell prompt did not return. I waited some seconds then hit Ctrl+C to cancel whatever tox was doing. It emitted a notice about "Teardown Started" but still was unresponsive. I hit another Ctrl+C but with no luck.
## Environment
Provide at least:
- OS: macOS 13.1
- `pip list` of the host Python where `tox` is installed:
```console
Package Version
------------- -------
cachetools 5.2.0
chardet 5.1.0
colorama 0.4.6
distlib 0.3.6
filelock 3.8.2
packaging 22.0
pip 22.3.1
platformdirs 2.6.0
pluggy 1.0.0
pyproject_api 1.2.1
setuptools 65.6.3
tox 4.0.16
virtualenv 20.17.1
wheel 0.38.4
```
## Output of running tox
Ran without `-rvv`:
```console
requests-mock bugfix-17 $ tox
python: install_deps> python -I -m pip install pbr -r /Users/jaraco/code/jamielennox/requests-mock/requirements.txt -r /Users/jaraco/code/jamielennox/requests-mock/test-requirements.txt
.pkg: install_requires> python -I -m pip install 'setuptools>=40.8.0' wheel
.pkg: _optional_hooks> python /Users/jaraco/.local/pipx/venvs/tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: get_requires_for_build_sdist> python /Users/jaraco/.local/pipx/venvs/tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: install_requires_for_build_sdist> python -I -m pip install pbr
.pkg: prepare_metadata_for_build_wheel> python /Users/jaraco/.local/pipx/venvs/tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: build_sdist> python /Users/jaraco/.local/pipx/venvs/tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
python: packaging backend failed (code=error: Multiple top-level packages discovered in a flat-layout: ['releasenotes', 'requests_mock'].
To avoid accidental inclusion of unwanted files or directories,
setuptools will not proceed with this build.
If you are trying to create a single distribution with multiple packages
on purpose, you should not rely on automatic discovery.
Instead, consider the following options:
1. set up custom discovery (`find` directive with `include` or `exclude`)
2. use a `src-layout`
3. explicitly set `py_modules` or `packages` with a list of names
To find more information, look for "package discovery" on setuptools docs.), with SystemExit: error: Multiple top-level packages discovered in a flat-layout: ['releasenotes', 'requests_mock'].
To avoid accidental inclusion of unwanted files or directories,
setuptools will not proceed with this build.
If you are trying to create a single distribution with multiple packages
on purpose, you should not rely on automatic discovery.
Instead, consider the following options:
1. set up custom discovery (`find` directive with `include` or `exclude`)
2. use a `src-layout`
3. explicitly set `py_modules` or `packages` with a list of names
To find more information, look for "package discovery" on setuptools docs.
/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
Traceback (most recent call last):
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/dist.py", line 1204, in run_command
self.set_defaults()
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/discovery.py", line 340, in __call__
self._analyse_package_layout(ignore_ext_modules)
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/discovery.py", line 373, in _analyse_package_layout
or self._analyse_flat_layout()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/discovery.py", line 430, in _analyse_flat_layout
return self._analyse_flat_packages() or self._analyse_flat_modules()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/discovery.py", line 436, in _analyse_flat_packages
self._ensure_no_accidental_inclusion(top_level, "packages")
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/discovery.py", line 466, in _ensure_no_accidental_inclusion
raise PackageDiscoveryError(cleandoc(msg))
setuptools.errors.PackageDiscoveryError: Multiple top-level packages discovered in a flat-layout: ['releasenotes', 'requests_mock'].
To avoid accidental inclusion of unwanted files or directories,
setuptools will not proceed with this build.
If you are trying to create a single distribution with multiple packages
on purpose, you should not rely on automatic discovery.
Instead, consider the following options:
1. set up custom discovery (`find` directive with `include` or `exclude`)
2. use a `src-layout`
3. explicitly set `py_modules` or `packages` with a list of names
To find more information, look for "package discovery" on setuptools docs.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/jaraco/.local/pipx/venvs/tox/lib/python3.11/site-packages/pyproject_api/_backend.py", line 90, in run
outcome = backend_proxy(parsed_message["cmd"], **parsed_message["kwargs"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jaraco/.local/pipx/venvs/tox/lib/python3.11/site-packages/pyproject_api/_backend.py", line 32, in __call__
return getattr(on_object, name)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/build_meta.py", line 417, in build_sdist
return self._build_with_temp_dir(['sdist', '--formats', 'gztar'],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/build_meta.py", line 398, in _build_with_temp_dir
self.run_setup()
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/build_meta.py", line 485, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/build_meta.py", line 335, in run_setup
exec(code, locals())
File "<string>", line 5, in <module>
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 215, in run_commands
raise SystemExit("error: " + str(msg))
SystemExit: error: Multiple top-level packages discovered in a flat-layout: ['releasenotes', 'requests_mock'].
To avoid accidental inclusion of unwanted files or directories,
setuptools will not proceed with this build.
If you are trying to create a single distribution with multiple packages
on purpose, you should not rely on automatic discovery.
Instead, consider the following options:
1. set up custom discovery (`find` directive with `include` or `exclude`)
2. use a `src-layout`
3. explicitly set `py_modules` or `packages` with a list of names
To find more information, look for "package discovery" on setuptools docs.
Backend: run command build_sdist with args {'sdist_directory': '/Users/jaraco/code/jamielennox/requests-mock/.tox/.pkg/dist', 'config_settings': None}
Backend: Wrote response {'code': 'error: Multiple top-level packages discovered in a flat-layout: [\'releasenotes\', \'requests_mock\'].\n\nTo avoid accidental inclusion of unwanted files or directories,\nsetuptools will not proceed with this build.\n\nIf you are trying to create a single distribution with multiple packages\non purpose, you should not rely on automatic discovery.\nInstead, consider the following options:\n\n1. set up custom discovery (`find` directive with `include` or `exclude`)\n2. use a `src-layout`\n3. explicitly set `py_modules` or `packages` with a list of names\n\nTo find more information, look for "package discovery" on setuptools docs.', 'exc_type': 'SystemExit', 'exc_msg': 'error: Multiple top-level packages discovered in a flat-layout: [\'releasenotes\', \'requests_mock\'].\n\nTo avoid accidental inclusion of unwanted files or directories,\nsetuptools will not proceed with this build.\n\nIf you are trying to create a single distribution with multiple packages\non purpose, you should not rely on automatic discovery.\nInstead, consider the following options:\n\n1. set up custom discovery (`find` directive with `include` or `exclude`)\n2. use a `src-layout`\n3. explicitly set `py_modules` or `packages` with a list of names\n\nTo find more information, look for "package discovery" on setuptools docs.'} to /var/folders/sx/n5gkrgfx6zd91ymxr2sr9wvw00n8zm/T/pep517_build_sdist-4ckv1cgg.json
.pkg: _exit> python /Users/jaraco/.local/pipx/venvs/tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
^CROOT: [70319] KeyboardInterrupt - teardown started
ROOT: interrupt tox environment: python
^C
```
Looking at the process tree, there's a defunct process under tox:
```
requests-mock bugfix-17 $ pstree -s tox
-+= 00001 root /sbin/launchd
\-+= 69773 jaraco /Applications/Hyper.app/Contents/MacOS/Hyper
\-+= 69782 jaraco /opt/homebrew/Cellar/python@3.11/3.11.1/Frameworks/Python.framework/Versions/3.11/Resources/Python.app/Contents/MacOS/Python /Users/jaraco/.local/bin/xonsh --login
\-+= 70319 jaraco /opt/homebrew/Cellar/python@3.11/3.11.1/Frameworks/Python.framework/Versions/3.11/Resources/Python.app/Contents/MacOS/Python /Users/jaraco/.local/bin/tox
\--- 70327 jaraco <defunct>
```
Invoking kill on 70319 gave the shell back.
I cleaned the space and re-ran tox and got a similar experience. Even without any Ctrl+C, I see the defunct process under tox. | 0easy
|
Title: I can't import the library into jupyter-lab
Body: 
| 0easy
|
Title: Docs: SerializationPluginProtocol example is cut off
Body: ### Summary
It seems the source has changed (https://docs.advanced-alchemy.litestar.dev/latest/reference/extensions/litestar/plugins/serialization.html#advanced_alchemy.extensions.litestar.plugins.serialization.SQLAlchemySerializationPlugin.supports_type) and the example is missing quite a bit:
<img width="915" alt="image" src="https://github.com/user-attachments/assets/af00a0fe-8c2a-45ed-ba80-1a5b6799ed40"> | 0easy
|
Title: Find and replace does not allow for replace with empty string
Body: ### Describe the bug
Find and replace does not allow replacing with an empty string.
### Environment
"marimo": "0.10.15",
### Code to reproduce
_No response_ | 0easy
|
Title: Ulcer Index development version vs main version
Body: Ulcer Index
In the development version (I am ignoring the `everget` variant):
```python
highest_close = close.rolling(length).max()
downside = scalar * (close - highest_close)
downside /= highest_close
d2 = downside * downside
_ui = d2.rolling(length).sum()
ui = np.sqrt(_ui / length)
```
In the development version, I sometimes get `RuntimeWarning: invalid value encountered in sqrt`. After searching, I found that `_ui` sometimes contains a negative value, and taking the square root of a negative number triggers the warning. I do not know why it is negative! I have checked the `close` data and it looks normal, with no negative values.
`d2` is never negative, but `_ui` sometimes is. To fix it in the development version, I added `_ui = _ui.abs()`.
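A defensive variant of the development code (a sketch, not the library's actual fix): pandas computes rolling sums incrementally, so floating-point cancellation can yield values like `-1e-17` even when every `d2` element is non-negative; clipping at zero before the square root is arguably safer than `.abs()`, since a genuinely large negative sum would still indicate a data problem:

```python
import numpy as np
import pandas as pd

def ulcer_index(close: pd.Series, length: int = 14, scalar: float = 100.0) -> pd.Series:
    # Same math as the development version, but the rolling sum is
    # clipped at zero: floating-point cancellation in the incremental
    # rolling-sum algorithm can produce tiny negatives like -1e-17,
    # and np.sqrt of those emits "invalid value encountered in sqrt".
    highest_close = close.rolling(length).max()
    downside = scalar * (close - highest_close) / highest_close
    d2 = downside * downside
    _ui = d2.rolling(length).sum().clip(lower=0)
    return np.sqrt(_ui / length)
```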
In the main version (again ignoring the `everget` variant):
```python
highest_close = close.rolling(length).max()
downside = scalar * (close - highest_close)
downside /= highest_close
d2 = downside * downside
ui = (d2.rolling(length).sum() / length).apply(np.sqrt)
```
In the main version, I do not get the error with the same data.
Maybe pandas' `apply` function is suppressing the error?
Is my data corrupted, or is there something wrong in the calculation? | 0easy
|
Title: Confidence intervals interpretation
Body: ```A 95% confidence interval does not mean that for a given realized interval there is a 95% probability that the population parameter lies within the interval (i.e., a 95% probability that the interval covers the population parameter).```
- [Wiki](https://en.wikipedia.org/wiki/Confidence_interval)
But on medium,
```In the end, we see that, with 95% probability, the average number of customer service calls from loyal customers lies between 1.4 and 1.49 while the churned clients called 2.06 through 2.40 times on average.``` | 0easy
|
Title: Unsharp Mask on RGB Image Results in Black Image When channel_axis Specified
Body: ### Description:
When applying `skimage.filters.unsharp_mask` to an `RGB` image and specifying the `channel_axis`, the resulting image turns black. However, if `channel_axis` is not specified, the issue does not occur. This issue appears to be specifically related to the handling of the `channel_axis` parameter.
### Way to reproduce:
```
from skimage import data, filters, color
import matplotlib.pyplot as plt
# Load an RGB image
rgb_image = data.astronaut()
# Apply unsharp mask
unsharp_rgb = filters.unsharp_mask(rgb_image, channel_axis=-1)
# Display the images
plt.subplot(1, 2, 1)
plt.imshow(rgb_image)
plt.title('Original RGB Image')
plt.subplot(1, 2, 2)
plt.imshow(unsharp_rgb)
plt.title('Unsharp Mask RGB Image')
plt.show()
```
### Version information:
```Shell
3.9.6 (default, Mar 10 2023, 20:16:38)
[Clang 14.0.3 (clang-1403.0.22.14.1)]
macOS-13.3.1-x86_64-i386-64bit
scikit-image version: 0.22.0
numpy version: 1.26.2
```
| 0easy
|
Title: [BUG] "Scroll for ... more charts" not working
Body: I'm using Jupyter Notebook 6.1.4 with Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] and IPython 7.19.0.
As soon I display my dataframe and click on "Toggle Pandas/Lux" button, I get the lux charts as expected but cannot find a way to see all of them. On the bottom right corner it's written "Scroll for 12 more charts >>" but nothing happen if I scroll, expand/collapse the cell or click on the right arrows. | 0easy
|
Title: Remove bytes support from `robot.utils.normalize` function
Body: Our `normalize` utility is used for normalizing strings, but it also supports bytes. The reason for the bytes support is that in Python 2 `str` actually was bytes and we needed to handle that in addition to `unicode`. In Python 3 strings and bytes are separated properly, and this kind of string manipulation doesn't make sense with bytes. The bytes support makes the code a bit more complicated and removing it will also make the code a tiny bit faster. | 0easy
|
Title: Better error message when a Dash app is not found
Body: From issue #51 (and also #48) it looks like trying to use an unregistered `Dash` app does not give a helpful error message. Even if this is not the resolution for that specific issue, it would be good to have a more helpful explanation of the problem when there is an attempt to use an app that has not been registered.
| 0easy
|
Title: Eliminate red border on the screenshot using DWMWA_EXTENDED_FRAME_BOUNDS
Body: Reported on StackOverflow: https://stackoverflow.com/questions/54767813/pywinauto-capture-as-image-adds-unwanted-borders
Solution is also suggested there. | 0easy
|
Title: If test is skipped using `--skip` or `--skip-on-failure`, show used tags in test's message
Body: Using the command line option `--skip <tag>` prints `Test skipped using '--skip' command line option.` It would be nice if the message also showed which tag caused the test to be skipped, instead of just that generic information.
For example, when adding the two command line options `--skip not_ready` and `--skip broken`, the output could show something like `Test skipped due to skipped tag 'not_ready'`. | 0easy
|
Title: 'TfidfVectorizer' object has no attribute 'get_feature_names_out' error
Body: Hello, I just started using mljar. I am trying to build a model, but it is giving me the `'TfidfVectorizer' object has no attribute 'get_feature_names_out'` error in error.md. I am using scikit-learn version 1.0.0. How can I fix this? | 0easy
|
Title: "Minor" longitude and latitude gridlines
Body: Is there any straightforward way to stride through which tick labels are shown in cartopy? Or could this be added as a feature?
This would be the equivalent of having `xticks=[0,5,10,15,20]` + `xticklabels=['0', '', '10', '', '20']`, for instance. The motivation is mainly longitude, where labels get packed and overlap at font sizes large enough for folks to read.
```python
import numpy as np
import proplot as plot
plot.rc['geogrid.linestyle'] = ':'
plot.rc['geogrid.linewidth'] = 2
plot.rc['geogrid.color'] = '#d3d3d3'
data = np.random.rand(90, 180)
lon = np.linspace(-179.5, 179.5, 180)
lat = np.linspace(-89.5, 89.5, 90)
f, ax = plot.subplots(width='12cm', aspect=4, proj='cyl', tight=True,)
p = ax.pcolormesh(lon, lat, data, )
ax.format(latlim=(20,50), lonlim=(-135, -105), land=True,
latlines=plot.arange(20,50,5), lonlines=plot.arange(-135,-105, 5),
labels=True)
ax.colorbar(p, loc='r')
```
<img width="364" alt="Screen Shot 2019-08-28 at 2 06 45 PM" src="https://user-images.githubusercontent.com/8881170/63892769-3f444d00-c99d-11e9-856f-f3e9a8cfcf70.png">
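The `xticks` + `xticklabels` idea above can be generated programmatically: keep a gridline at every tick but blank out all but every n-th label. A framework-agnostic sketch (the helper name and `fmt` formatter are mine, not a proplot/cartopy API):

```python
def thinned_labels(ticks, keep_every=2, fmt=None):
    """One label string per tick; all but every `keep_every`-th are blank.

    Gridlines stay wherever `ticks` puts them -- only the text is thinned.
    """
    if fmt is None:
        # Simple longitude formatter: negative -> W, non-negative -> E
        fmt = lambda t: f"{abs(t):g}W" if t < 0 else f"{t:g}E"
    return [fmt(t) if i % keep_every == 0 else "" for i, t in enumerate(ticks)]

# Gridlines every 5 degrees from 135W to 105W, labels only every 10 degrees
lonlines = list(range(-135, -100, 5))
lonlabels = thinned_labels(lonlines)
```

The resulting `lonlabels` list could then be fed to whatever label mechanism the plotting library exposes.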
This case doesn't have any issues, but imagine if you wanted the label size to be 14, 16, or 18 point font for a poster. All the lon labels overlap. **Any way to show just 130W, 120W, 110W but maintain the grid structure?** I figure there's a means to do it with the formatter/ticker, but I can't figure it out. | 0easy
|
Title: atr sometimes wrong
Body: Version: 0.3.02b
**Describe the bug**
Sometimes I get atr(14) even if I call atr(3)
<img width="507" alt="Screenshot_5" src="https://user-images.githubusercontent.com/22365509/124748196-1854eb80-df23-11eb-9d69-069009ac33df.png">
I guess the error is here in atr.py:
<img width="389" alt="Screenshot_6" src="https://user-images.githubusercontent.com/22365509/124748381-56520f80-df23-11eb-859f-26fa64cab170.png">
probably it has to be:
atr = ATR(high, low, close, length)
Thanks!
| 0easy
|
Title: Improving output of generate_dataset_by_class function to include naming convention
Body: **Is your feature request related to a problem? Please describe.**
The path does not change based on the type of dataset presented.
This is an improvement to the generation of the dataset in the space-time analysis script.
Comment on PR:
https://github.com/capitalone/DataProfiler/pull/781#discussion_r1166029318
**Describe the outcome you'd like:**
There needs to be a naming convention for the file in the function `generate_dataset_by_class`
Possible solution:
dataset_seed_{}length{}cols{}.csv
**Additional context:**
| 0easy
|
Title: (🐞) `RecursionError` in `==` when object has a reference to itself
Body: ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
```
[Previous line repeated 370 more times]
File "C:\Users\AMONGUS\projects\test\.venv\Lib\site-packages\pydantic\main.py", line 961, in __eq__
if isinstance(other, BaseModel):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\AMONGUS\projects\test\.venv\Lib\site-packages\pydantic\_internal\_model_construction.py", line 273, in __instancecheck__
return hasattr(instance, '__pydantic_validator__') and super().__instancecheck__(instance)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen abc>", line 119, in __instancecheck__
RecursionError: maximum recursion depth exceeded while calling a Python object
```
### Example Code
```Python
from __future__ import annotations
from pydantic import BaseModel
class A(BaseModel):
a: A = None
a = A()
a.a = a
a2 = A()
a2.a = a2
a == a2
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: C:\Users\AMONGUS\projects\test\.venv\Lib\site-packages\pydantic
python version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
platform: Windows-10-10.0.19045-SP0
related packages: typing_extensions-4.12.2
commit: unknown
```
| 0easy
|
Title: CI: more extensive tests that optional dependencies are optional
Body: Currently we only test that optional packages are not needed to `import cleanlab`, on the CI server before it has installed requirements-dev.txt: https://github.com/cleanlab/cleanlab/blob/10abd7d1910b7db237203b0533e07d9f8dd91fe0/.github/workflows/ci.yml#L30
Ideally, there would be more involved testing at this point (before optional packages like tensorflow, pytorch, etc. have been installed), e.g. some subset of the fast-running unit tests that are known not to require these optional packages. | 0easy
|
Title: 🚸Actions button with keyboards
Body: ## Feature Request
Refactor so that action elements can be reached from the keyboard.
## See
This PR in People works on this problem as well.
https://github.com/numerique-gouv/people/pull/298 | 0easy
|
Title: [Feature] use pytest for sgl-kernel
Body: ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 2. Please use English, otherwise it will be closed.
### Motivation
https://github.com/sgl-project/sglang/tree/main/sgl-kernel/tests
Some tests use unittest, we want to switch them to pytest.
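For context, the conversion pattern typically looks like this (a generic sketch; the class and assertions are illustrative, not taken from sgl-kernel's actual tests):

```python
# Before (unittest style): test methods on a TestCase subclass,
# using self.assert* helpers.
import unittest

class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

# After (pytest style): plain functions with bare asserts;
# parametrize replaces hand-written loops over test cases.
import pytest

@pytest.mark.parametrize("a,b,expected", [(1, 1, 2), (2, 3, 5)])
def test_add(a, b, expected):
    assert a + b == expected
```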
### Related resources
_No response_ | 0easy
|
Title: add feature importance heatmap for all models
Body: | 0easy
|
Title: SyntaxError in examples
Body: ### Specifications
* Client Version: 1.35
* python version 3.10.8
* mypy version 0.991
### Code sample to reproduce problem
Code example from **README.rst**
```py
class BatchingCallback(object):
def success(self, conf: (str, str, str), data: str):
print(f"Written batch: {conf}, data: {data}")
```
### Expected behavior
Code from the README can be copy-pasted into a source file without generating syntax errors from either Python or mypy.
### Actual behavior
Running mypy gives error:
```
example.py: note: In member "success" of class "BatchingCallback":
example.py:75:27: error: Syntax error in type annotation [syntax]
example.py:75:27: note: Suggestion: Use Tuple[T1, ..., Tn] instead of (T1, ..., Tn)
```
### Additional info
Using the suggestion from the mypy error message should be sufficient.
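Applying mypy's suggestion, the README snippet becomes:

```python
from typing import Tuple

class BatchingCallback(object):
    # Tuple[str, str, str] replaces the invalid (str, str, str) annotation
    def success(self, conf: Tuple[str, str, str], data: str):
        print(f"Written batch: {conf}, data: {data}")
```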
(Would be great if `tuple` alias could be used instead of `typing.Tuple` to save on import verbosity, but that requires 3.9+, this library is currently targeting 3.7+.) | 0easy
|
Title: [Performance, tech debt] Merge add_weighted
Body: We have
```python
@clipped
def add_weighted(img1: np.ndarray, alpha: float, img2: np.ndarray, beta: float) -> np.ndarray:
return img1.astype(float) * alpha + img2.astype(float) * beta
```
and
```python
@clipped
@preserve_shape
def mix_arrays(array1: np.ndarray, array2: np.ndarray, mix_coef: float) -> np.ndarray:
array2 = array2.reshape(array1.shape).astype(array1.dtype)
return cv2.addWeighted(array1, mix_coef, array2, 1 - mix_coef, 0)
```
These are similar, and the first one does not even use cv2 optimizations.
1. We need to update `add_weighted` with cv2 and proper type casting (we should not cast to float without good reason)
2. We need to remove `mix_arrays` and call `add_weighted` instead.
| 0easy
|
Title: [Tensorflow] Plots appear smoother inconsistent with MXNet/PyTorch
Body: Although the results look nice and ideal in all TensorFlow plots and are consistent across all frameworks, there is a small difference (more of a consistency issue). The resulting training loss/accuracy plots look like they are sampled at fewer points. They appear straighter, smoother, and less wiggly compared to PyTorch or MXNet.
It can be clearly seen in chapter 6([CNN Lenet](https://d2l.ai/chapter_convolutional-neural-networks/lenet.html#training)), 7([Modern CNN](https://d2l.ai/chapter_convolutional-modern/alexnet.html#training)), 11 ([Optimization LR Scheduler](https://d2l.ai/chapter_optimization/lr-scheduler.html#warmup)) plots. It is due to how the class `TrainCallback` is implemented **on epoch** instead of **on batch** [here](https://d2l.ai/chapter_convolutional-neural-networks/lenet.html#training).
This consistency issue plagues many of the TensorFlow plots across the book. PRs are welcome, this sounds like a good issue for beginners who would like to contribute to d2l. Feel free to ask any questions/doubts.
cc @astonzhang @terrytangyuan | 0easy
|
Title: Add a visual indication to Favicon/Tab
Body: 
This is what we get when a monitor is down in a full tab. However, if the tab is pinned in the browser, we have no visual indication.
[Uptime-Kuma](https://github.com/louislam/uptime-kuma) project has implemented this, for reference _(here both tabs pinned, side by side, where both have 2 monitors down)_:

| 0easy
|
Title: Add e2e test for `tune` in the `katib_client` module
Body: ### What you would like to be added?
We need to add e2e test for the `tune` function in the `katib_client` module.
### Why is this needed?
**References**
- e2e test for `create_experiment` in the `katib_client` module: https://github.com/kubeflow/katib/blob/master/test/e2e/v1beta1/scripts/gh-actions/run-e2e-experiment.py#L172
- e2e test for `training-operator` SDK: https://github.com/kubeflow/training-operator/blob/master/sdk/python/test/e2e/test_e2e_pytorchjob.py#L174
**Source**
- https://github.com/kubeflow/katib/pull/2325#issuecomment-2186646927
**Related issues**
- [Add unit tests to the Katib SDK #2184](https://github.com/kubeflow/katib/issues/2184)
- [Add unit test for create_experiment in the katib_client module #2325](https://github.com/kubeflow/katib/pull/2325)
- [Add unit test for tune in the katib_client module #2372](https://github.com/kubeflow/katib/issues/2372)
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | 0easy
|
Title: Libdoc crashes if it does not detect documentation format
Body: If documentation format isn't detected automatically based on the output file extension or given explicitly using `--format`, Libdoc execution fails with `AttributeError`:
```
$ libdoc BuiltIn BuiltIn
Unexpected error: AttributeError: 'NoneType' object has no attribute 'upper'
Traceback (most recent call last):
File "/home/peke/Devel/robotframework/src/robot/utils/application.py", line 81, in _execute
rc = self.main(arguments, **options)
File "src/robot/libdoc.py", line 197, in main
libdoc.save(output, format, self._validate_theme(theme, format))
File "/home/peke/Devel/robotframework/src/robot/libdocpkg/model.py", line 93, in save
with LibdocOutput(output, format) as outfile:
File "/home/peke/Devel/robotframework/src/robot/libdocpkg/output.py", line 27, in __init__
self._format = format.upper()
AttributeError: 'NoneType' object has no attribute 'upper'
```
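A defensive fix could validate the format before using it, along these lines (a sketch, not the actual Libdoc code; Robot Framework would presumably raise its own `DataError` with a usage hint rather than a plain `ValueError`):

```python
def resolve_format(format, output):
    # Fail with an informative message instead of crashing with
    # AttributeError when the format was neither given explicitly
    # (--format) nor deducible from the output file extension.
    if format is None:
        raise ValueError(
            f"Format missing and could not be deduced from output file '{output}'."
        )
    return format.upper()
```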
This is an error situation, but it should be reported correctly. | 0easy
|
Title: DOC: spatial index query docs do not include all predicates
Body: Our docstring for query and query bulk contains only a subset of predicates pygeos supports. See https://geopandas.org/en/latest/docs/reference/api/geopandas.sindex.SpatialIndex.query_bulk.html#geopandas-sindex-spatialindex-query-bulk does not list all supported predicates. Compare to https://pygeos.readthedocs.io/en/stable/strtree.html#pygeos.strtree.STRtree.query_bulk | 0easy
|
Title: Place an "edit" button on the detail page
Body: ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
It is useful if the detail page has an edit button.
### Describe the solution you would like.
I think the edit button for each record in the list view can be removed if the detail page has it. The small icon in the list is a bit tiny and prone to misclick.
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: [tech debt] Cleanup Functions
Body: in `crops.functions`
`crop` and `random_crop` do exactly the same. Should remove one of them. | 0easy
|
Title: Problem with current session API + mfa + social auth
Body: Hello, I'm seeing unexpected behavior on the current session API. I made a small reproduction repo [here](https://gitlab.com/bufke/allauth-headless-social-mfa).
It looks like, under some circumstances, the current session API is not returning mfa_authenticate after the user authenticates with social auth and needs to proceed to MFA auth. This renders the user unable to log in. The same user account works fine, completing both social and MFA auth, using the regular Django templates at /accounts/.
The current session API works fine on https://react.demo.allauth.org and when I run a JavaScript dev server that proxies API requests to the backend. I suspect the issue has something to do with stash/unstash_login in [account/utils.py](https://github.com/pennersr/django-allauth/blob/cfd14550ddd4115b4b50b1aca54719cbcbf55f9b/allauth/account/utils.py#L169).
When it works (in my JS dev server), I see that unstash gets a `<allauth.account.models.Login object at 0x759959adb770>` object, but it does not when it isn't working (in the pure Django example).
I'll continue to debug but let me know if you have any ideas. I think I've demonstrated the problem in a very trivial JS example above. | 0easy
|
Title: DBAPIClient must create parent folders when initializing sqlite3 connection
Body: Currently this fails:
```python
from ploomber.clients import DBAPIClient
import sqlite3
client = DBAPIClient(sqlite3.connect, dict(database='non-existing-folder/my.db'))
```
Error:
```pytb
OperationalError: unable to open database file
```
This happens when the parents of the db (non-existing-folder in this case) do not exist; it would be better to create them. But this only applies to sqlite, not to any other db.
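A sketch of the proposed behavior (the helper name is mine; in `DBAPIClient` this would only apply when the connect function is `sqlite3.connect`, since other databases have no file path):

```python
import sqlite3
from pathlib import Path

def connect_sqlite(database: str) -> sqlite3.Connection:
    # Create any missing parent folders first; sqlite3 itself raises
    # OperationalError ("unable to open database file") when the
    # directory does not exist.
    Path(database).parent.mkdir(parents=True, exist_ok=True)
    return sqlite3.connect(database)
```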
| 0easy
|
Title: Add Checkboxes
Body: Rio already comes with switches, but sometimes checkboxes just look nicer. Considering how simple they are to add, there's no excuse :) | 0easy
|
Title: question about travis CI tests
Body: It seems like the same tests (for the same code) are run multiple times. Are different things being tested for, or is this a potential inefficiency that could be corrected? | 0easy
|
Title: add classes_to_plot option to plot_cumulative_gain
Body: I think it could be useful, when one wants to plot only e.g. class 1, to have an option to produce consistent plots for both plot_cumulative_gain and plot_roc.
At the moment, only plot_roc supports such an option.
Thanks a lot | 0easy
|
Title: [Question] Can `utils.play.Play` be used for playing `Blackjack`?
Body: ### Question
I would like to use [`utils.play.Play`](https://gymnasium.farama.org/api/utils/#gymnasium.utils.play.play) to play `Blackjack`; however, I failed. `utils.play.Play` does not wait for keypresses and proceeds to the next step using the `noop` key (default 0) if the user does not provide any input.
Can `utils.play.Play` be used for playing `Blackjack`? Or: Is `utils.play.Play` primarily implemented for high fps games like `CarRacing`?
If yes, I can happily open an enhancement request to note that in the docs. | 0easy
|
Title: Series.droplevel
Body: Implement `Series.droplevel`.
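For reference, the pandas behavior being mirrored looks like this:

```python
import pandas as pd

# droplevel removes one level of a MultiIndex and keeps the rest.
idx = pd.MultiIndex.from_tuples(
    [("a", 1), ("b", 2)], names=["letter", "number"]
)
s = pd.Series([10, 20], index=idx)
out = s.droplevel("letter")  # index is now just the "number" level
```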
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.droplevel.html | 0easy
|
Title: Comparison of Different Fine-Tuning Techniques for Conversational AI
Body: ### Feature request
It would be incredibly helpful to have a clear comparison or support for various fine-tuning techniques specifically for conversational AI. This feature could include insights into their strengths, limitations, and ideal use cases, helping practitioners choose the right approach for their needs.
Here’s a list of techniques to consider:
LoRa
AdaLoRa
BONE
VeRa
XLora
LN Tuning
VbLora
HRA (Hyperparameter Regularization Adapter)
IA3 (Input-Aware Adapter)
Llama Adapter
CPT (Conditional Prompt Tuning), etc.
### Motivation
With the growing number of fine-tuning techniques for conversational AI, it can be challenging to identify the most suitable approach for specific use cases. A comprehensive comparison of these techniques—highlighting their strengths, limitations, and ideal scenarios—would save time, reduce trial-and-error, and empower users to make informed decisions. This feature would bridge the gap between research and practical application, enabling more effective model customization and deployment.
### Your contribution
I’d be happy to collaborate on this! While I might not have a complete solution right now, I’m willing to contribute by gathering resources, reviewing papers, or helping organize comparisons. If others are interested in teaming up, we could work together on a PR to make this feature happen. Let’s connect and brainstorm how we can tackle this effectively! | 0easy
|
Title: DOC: translate `Contributing to the code base` to Chinese
Body: https://doc.xorbits.io/en/latest/development/contributing_codebase.html | 0easy
|
Title: In docs use Intersphinx to link to the Django project
Body: and move the RST sources into their own folder. | 0easy
|
Title: Elapsed time is ignored when parsing output.xml if start time is not set
Body: When writing execution times to output.xml, we always include the elapsed time and also include the start time if it is set. Typically the start time is available, but it is cleared, for example, when merging results.
The code parsing execution times from output.xml tries to detect whether the format is new (RF >= 7.0) or old (RF < 7.0) by checking whether the `start` attribute is set. In the old format there's `starttime` instead of `start`, but `start` is also missing if the suite didn't have the start time set. In such cases the elapsed time is fully ignored. This is easy to fix by looking at the `elapsed` attribute instead of `start`.
If elapsed time is not parsed, it's calculated automatically based on child element elapsed times. That is typically the right time, so it's unlikely that anyone is actually affected by this bug. I noticed it when creating a test for #5058 and it also happens to work as a workaround for that issue. | 0easy
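A minimal sketch of the fix described in the issue above: prefer the `elapsed` attribute over `start` when reading execution times. The element shape follows the issue's description of output.xml; the helper function itself is hypothetical, not Robot Framework's actual parser.

```python
import xml.etree.ElementTree as ET
from typing import Optional

def read_elapsed(status_xml: str) -> Optional[float]:
    """Return elapsed seconds if the new-format 'elapsed' attribute is present."""
    elem = ET.fromstring(status_xml)
    elapsed = elem.get("elapsed")      # present in RF >= 7.0 output.xml
    if elapsed is not None:
        return float(elapsed)
    return None                        # old format: fall back to starttime/endtime

# Elapsed time is parsed even when 'start' is missing (e.g. merged results):
print(read_elapsed('<status status="PASS" elapsed="1.5"/>'))  # 1.5
```

Keying the format detection on `elapsed` rather than `start` means a cleared start time no longer discards the recorded elapsed time.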
|
Title: Hay Update training sample plots to readme
Body: 🙌🙌🙌🙌🙌 | 0easy
|
Title: Control continue-on-failure mode by using recursive and non-recursive tags together
Body: By default, test case teardowns implicitly run on `robot:recursive-continue-on-failure` mode. As I rely on keywords that are not only used in teardowns, I need `robot:continue-on-failure` mode. But setting the tag on teardown-keyword does not have any effect.
Example:
```robot
*** Test Cases ***
Demo Continue On Failure In Teardown
Log Hello World
[Teardown] Cleanup Test Case
*** Keywords ***
Cleanup Test Case
[Tags] robot:continue-on-failure
Step 1
Step 2
Step 1
Log All good.
Step 2
Fail When this fails
Fail Then this 2nd FAIL must not be executed!!!
```
I do want the teardown keyword to continue on failure, just not recursively. The only workaround is to add `robot:stop-on-failure` to all keywords.
The easiest way would be if `robot:continue-on-failure` overrode the default behaviour of teardown keywords. | 0easy
|
Title: Drop Python 3.8 and Python 3.9 support
Body: At the moment the minimum Python version we support is Python 3.8. It has already reached its EOL in October 2024, but it is still pretty widely used. It is also the default Python in Ubuntu 20.04 LTS that is supported until April 2025. Python 3.9 will reach its EOL in October 2025. I'm not aware of it being used by any Linux LTS release, so supporting it much longer than that doesn't have too much benefits.
We don't currently have concrete plans for RF 8.0 development, but it being released in late 2025 or early 2026 ought to be a good estimate. We can thus safely drop both Python 3.8 and Python 3.9 support in that release.
The main benefit of having Python 3.10 as the minimum version is naturally the possibility to take advantage of new features it contains. There are enhancements especially to typing that allow, for example, writing type hints like `list[int]` and `int | float` instead of `'list[int]'` and `'int | float'`. | 0easy
|
Title: [DOCS]: How do you set criteria for "Additional Questions"?
Body: ### Affected documentation section
_No response_
### Documentation improvement description
How do you set criteria for "Additional Questions"?
**For example.**
How many years of work experience do you have with Node.js? **Answer** `10`
### Why is this change necessary?
_No response_
### Additional context
_No response_ | 0easy
|
Title: Refactor the code to make sure os path related logic is OS agnostic
Body: Refactor the code to make the code expecting Unix/Linux paths OS-independent.
Test and ensure that the entire application with all its functionality (including Scan engine, API and WebUI) works equally fine on both Windows and Unix-like operating systems | 0easy
|
Title: Better traceback messages (working with very dirty csv files)
Body: I'm trying to build a ModelResource to describe/normalize a very messy data export. It would be very useful if the `ModelResource().import_data()` function could print out the offending row when `raise_error=True`. I'm guessing the way to do that is to override the `import_row()` function, but I'm not sure how to do that. Has anyone succeeded in writing a more verbose error reporting feature for the command-line?
Basically, seeing this error:
```
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 235, in format_number
value = value.quantize(decimal.Decimal(1).scaleb(-decimal_places), context=context)
decimal.InvalidOperation: [<class 'decimal.InvalidOperation'>]
```
isn't helping me trace down which of the DecimalField columns, or what row is gumming up the process. It'd be nice if it gave either the row id, or just printed out the entire row. | 0easy
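A hedged sketch of the kind of verbose error reporting the issue above asks for. This is not the django-import-export API; it only shows the general technique of wrapping per-row conversion so the failing column and row are reported instead of a bare `decimal.InvalidOperation`.

```python
from decimal import Decimal, InvalidOperation

def convert_row(row: dict, decimal_columns: list) -> dict:
    """Convert the named columns to Decimal, reporting the row on failure."""
    converted = dict(row)
    for col in decimal_columns:
        try:
            converted[col] = Decimal(row[col])
        except InvalidOperation:
            # Re-raise with the offending column and full row for debugging.
            raise ValueError(f"Bad decimal in column {col!r}, row: {row}") from None
    return converted

try:
    convert_row({"id": "1", "price": "12,50"}, ["price"])
except ValueError as e:
    print(e)  # Bad decimal in column 'price', row: {'id': '1', 'price': '12,50'}
```

In django-import-export, the analogous place to apply this wrapping would be an overridden `import_row()` on the `ModelResource` subclass.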
|
Title: provide a basic GUI for users who prefer to uses GUI
Body:
### Description
Users should have the flexibility of using a simple GUI if they don't want to use the CLI. A simple GUI could perhaps be made with Tkinter?
It's important to not add any dependency for this. It would be better to implement it using a module from the Python standard library. | 0easy
|
Title: Better error message if `nodeenv` fails
Body: ## Problem
See issues ran into here: https://github.com/RobertCraigie/prisma-client-py/issues/784.
In the future we may want to move away from nodeenv entirely because of issues like this, but for now an improved error message that points to `nodejs-bin` is good enough.
| 0easy
|
Title: [INFRA] Test import as part of ci
Body: **Is your feature request related to a problem? Please describe.**
Every month or two we get a failed release because unit tests pass, but something wrong with the exports means post-install import actually fails. E.g., https://github.com/graphistry/pygraphistry/pull/550 .
**Describe the solution you'd like**
As part of `ci` + `publish`, we should test a flow of `pip install ...` => `import graphistry; print(graphistry.__version__)`
* ci: we can likely do a `pip install -i http://...` using the git checkout hash => github url
* publish: after the `test.pypi.org` publish, we can do a `pip install -i http://test.pypi.org/...` of the specific version
| 0easy
|
Title: Add module docstrings
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
We could improve the readability / experience reading through the library source code if there were module level docstrings.
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
All python modules should include a docstring explaining what the purpose of the module is:
```py
"""
prisma.client
~~~~~~~~~~~~~
Main interface for the Client API - handles connecting, disconnecting
and querying the underlying Prisma Engine.
:auto-generated:
"""
```
## For contributors
This is a fairly simple change but it may not be exactly clear what each module does - if you need any help feel free to ask a question on the [discord](https://discord.gg/HpFaJbepBH).
If you update auto-generated modules make sure you [update the snapshots](https://prisma-client-py.readthedocs.io/en/stable/contributing/contributing/#snapshot-tests)! | 0easy
|
Title: [ENH] Add a reset_index boolean kwarg to shuffle()
Body: # Brief Description
I would like to propose to add a `reset_index` boolean kwarg to the `.shuffle()` function.
Doing so would help save a line of code. After shuffling a dataframe, the index remains the same. Sometimes, users may want to preserve the index, but my current opinion is that preservation of the index might not be the 90% use case. Instead, we want to shuffle the rows without keeping each row attached to its original index, i.e. we would want to reset the index.
# Example API
We would just add the reset_index kwarg to shuffle.
```python
df.shuffle(reset_index=True)
```
Underneath the hood, it would simply call .reset_index(drop=True):
```python
def shuffle(df, ..., reset_index=True):
    ...
    ...
    if reset_index:
        df = df.reset_index(drop=True)
    return df
```
| 0easy
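A runnable version of the abbreviated sketch in the issue above, assuming a pandas DataFrame. The function mirrors the proposed pyjanitor signature but is illustrative only, not the library's implementation.

```python
import pandas as pd

def shuffle(df: pd.DataFrame, random_state=None, reset_index: bool = True) -> pd.DataFrame:
    """Shuffle all rows; optionally drop the original index afterwards."""
    df = df.sample(frac=1, random_state=random_state)
    if reset_index:
        df = df.reset_index(drop=True)
    return df

df = pd.DataFrame({"a": [1, 2, 3]})
out = shuffle(df, random_state=0)
print(list(out.index))  # [0, 1, 2] -- index is reset regardless of row order
```

With `reset_index=False` the original index labels would travel with their rows, which is the current pyjanitor behavior the issue describes.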
|
Title: [FEA] replace flake8 with ruff
Body: **Is your feature request related to a problem? Please describe.**
Less of an impact than replacing mypy... but ruff is pleasantly fast, so let's replace flake8 with it!
| 0easy
|
Title: Weird issue when using `use_enum_values` in Pydantic v2
Body: ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
I encountered an extremely weird issue when using `use_enum_values` in a pydantic model. It works as expected in most cases, but when I add a field which is a list of another model, and that other model has multiple fields of the same enum type as the original model, the enum value fails to serialize to its string value when using `model_dump`.
### Example Code
```Python
from enum import Enum
from typing import Optional
from pydantic import BaseModel, ConfigDict, conlist
class Currency(Enum):
USD = "USD"
CAD = "CAD"
class MyBaseModel(BaseModel):
model_config = ConfigDict(
use_enum_values=True,
)
class TestModel(MyBaseModel):
currency: Currency
amount: int
class PreferredCurrencies(MyBaseModel):
currency_one: Optional[Currency] = None
currency_two: Optional[Currency] = None
# Model similar to TestModel with addition of an extra field 'preferred_currencies'
class TestModel2(MyBaseModel):
preferred_currencies: Optional[conlist(item_type=PreferredCurrencies)] = []
currency: Currency
amount: int
m1 = TestModel(currency=Currency.USD, amount=100)
print(m1.model_dump(exclude_none=True)) # Output: {'currency': 'USD', 'amount': 100}
m2 = TestModel2(
currency=Currency.USD,
amount=100,
)
print(
m2.model_dump(exclude_none=True)
) # Output: {'preferred_currencies': [], 'currency': <Currency.USD: 'USD'>, 'amount': 100}
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: /Users/pranab/.pyenv/versions/3.12.5/envs/my-env/lib/python3.12/site-packages/pydantic
python version: 3.12.5 (main, Sep 20 2024, 23:47:29) [Clang 15.0.0 (clang-1500.3.9.4)]
platform: macOS-15.0-arm64-arm-64bit
related packages: typing_extensions-4.12.2
commit: unknown
```
| 0easy
|