Title: Silence the `filelock` logger
Body: After we started using `tldextract` we sometimes get log messages from `filelock` at the DEBUG level; it makes sense to silence them like we do for some other libraries in https://github.com/scrapy/scrapy/blob/fe60c1224e39aa3d85b20afd54566f135d9de085/scrapy/utils/log.py#L45-L59 | 0easy
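A minimal sketch of the requested change, mirroring the idea in the linked Scrapy `configure_logging` snippet (the helper name here is made up for illustration):

```python
import logging

def silence_noisy_loggers(names=("filelock",), level=logging.WARNING):
    # Raise the level of a noisy third-party logger so its DEBUG records
    # are dropped, as Scrapy does for other chatty libraries.
    for name in names:
        logging.getLogger(name).setLevel(level)

silence_noisy_loggers()
```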
|
Title: Minor typo in the 3rd Assignment
Body: > In our case there's only **_ine_** feature so ... | 0easy
|
Title: Minor class in multi-class classification with too few samples to stratify
Body: In the case of too few samples to perform stratification there is an error thrown:
```py
The least populated class in y has only 2 members, which is less than n_splits=5.
```
Maybe we can detect such situations and upsample the minor classes? It is surely related to https://github.com/mljar/mljar-supervised/issues/157; however, this issue calls for a quick fix, while #157 requires a larger treatment of unbalanced datasets. | 0easy
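One possible shape of the quick fix, sketched with the standard library (helper name hypothetical, not mljar-supervised API): duplicate samples of any class with fewer members than `n_splits` until stratified splitting becomes valid.

```python
import random
from collections import Counter

def upsample_minor_classes(X, y, min_count=5, seed=0):
    """Duplicate samples of classes rarer than min_count so that
    stratified splitting with n_splits=min_count becomes valid."""
    rng = random.Random(seed)
    counts = Counter(y)
    X_out, y_out = list(X), list(y)
    for label, n in counts.items():
        if n < min_count:
            idx = [i for i, lbl in enumerate(y) if lbl == label]
            for _ in range(min_count - n):
                j = rng.choice(idx)  # resample with replacement
                X_out.append(X[j])
                y_out.append(label)
    return X_out, y_out
```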
|
Title: `test_import.py` module is large and can be broken up into smaller modules
Body: `tests/admin_integration/test_import.py` is now over 1k lines and should be broken up into smaller test modules.
Also
- look at adding helpers to remove code duplication
- Refer to URLs using constants in `admin_integration/mixins.py` rather than magic strings
| 0easy
|
Title: [Data Issue]: Canary Islands no data since 23.01.2025
Body: ### When did this happen?
No data in the Canary Islands since 2025-01-23, ca. 10:00.
Example: El Hierro

### What zones are affected?
El Hierro, La Gomera, La Palma, Tenerife, Gran Canaria, Fuerteventura/Lanzarote
### What is the problem?
No data at all in the Canary Islands since 2025-01-23, ca. 10:00 | 0easy
|
Title: SNlM0e : Refresh automatically
Body: Hi,
Sometimes, after making several requests with the session cookie, the SNlM0e value is no longer present in the response when you make an HTTP request to https://bard.google.com with that cookie.
Then you need to manually go to bard.google.com and refresh the cookie.
Has anyone found a solution to automatically fix the missing SNlM0e in the content of bard.google.com without human action? (I already use a proxy)
| 0easy
|
Title: Move StrawberryConfig to a typed dict
Body: as suggested in https://github.com/strawberry-graphql/strawberry/pull/2698#issuecomment-1631030171
we should change Strawberry's schema config to be a typed dictionary. I think this will make the DX a bit nicer, since you can do this:
```
import strawberry
...
schema = strawberry.Schema(..., config={"auto_camel_case": False})
```
which is a reduction of one import (and a bit fewer characters) 🙂
The main disadvantage for users is that they might not get autocompletion if their editor is not set up well.
The main disadvantage for us is that it might get a bit cumbersome to get values (or fall back to default values).
In any case, if we do this we should deprecate the old config and write a codemod, which should be easy to do 🙂 | 0easy
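A sketch of what the typed-dict config could look like (the class name and the fallback helper are assumptions; only `auto_camel_case` comes from the issue), including the "cumbersome to get values" part:

```python
from typing import TypedDict

class StrawberryConfigDict(TypedDict, total=False):
    # Key mirrors the existing StrawberryConfig option; the class name is hypothetical.
    auto_camel_case: bool

def get_auto_camel_case(config: StrawberryConfigDict) -> bool:
    # The "cumbersome" part: every read needs an explicit fallback to the default.
    return config.get("auto_camel_case", True)
```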
|
Title: ✨ Add a PDF reader in docs
Body: ## Feature Request
We can now integrate PDFs in our docs; it would be nice to be able to see them directly from the editor.
An example is available here:
https://www.blocknotejs.org/examples/custom-schema/pdf-file-block
See:
https://github.com/TypeCellOS/BlockNote/tree/6aec6e2940d5732a5d6003f821faf218fdeed3eb/examples/06-custom-schema/04-pdf-file-block | 0easy
|
Title: UserWarning: Distributing <class 'NoneType'> object. This may take some time.
Body: I think that we should not give this warning if the data that will be placed in the distribution storage is small in size. | 0easy
|
Title: Evaluating and saving checkpoints within an epoch
Body: **Is your feature request related to a problem? Please describe.**
Right now, allennlp currently evaluates / saves checkpoints at the end of every epoch. However, it'd be nice if there was some way of evaluating within an epoch. For example, say an epoch is 10,000 steps, and I want to evaluate / save a checkpoint every 500 steps.
This is especially important when fine-tuning pre-trained language models. Often, the optimal checkpoint does not occur at the end of an epoch, and models can quickly overfit. For example, the huggingface default scripts save checkpoints every 500 steps. It'd be nice to have a similar option for allennlp. | 0easy
**Describe the solution you'd like**
perhaps the checkpointing side of this could be controlled by the checkpointer, similar to how `keep_serialized_model_every_num_seconds` exists. Perhaps `keep_serialized_model_every_num_steps`. However, maybe there needs to be an analogous argument in the `Trainer` for evaluation. | 0easy
|
Title: [docs] Misleading reference to `ENVDIR` in the description of the `tox devenv` command
Body: This paragraph https://tox.wiki/en/4.21.2/cli_interface.html#tox-devenv-(d) references a mysterious `ENVDIR` which seems to be copy-pasted from https://tox.wiki/en/4.21.2/cli_interface.html#tox-legacy-(le). The command example does not have an `ENVDIR` placeholder, but it does have a `[path]`. Is that what was supposed to be in the description? | 0easy
|
Title: [Proposal] Allow frame stack for stacks of size 1
Body: ### Proposal
Hi,
I propose that the framework is updated to allow frame stacks of size 1. This could be helpful for pipelines which might use frame stacks in some settings and omit them in others. By allowing frame stacks of size 1 the frame stack wrapper can be left in the code and automatically add the extra dimension for frame stacking.
To implement this, simply remove these lines:
https://github.com/Farama-Foundation/Gymnasium/blob/a4f1a93dc19261049b352d45427c260ecab0f0a7/gymnasium/wrappers/stateful_observation.py#L377-L380
Previous versions of the gym wrapper supported this: https://github.com/openai/gym/blob/master/gym/wrappers/frame_stack.py
### Motivation
As described above, this will enable cleaner code for several applications without causing any issues to existing ones. Also, this is consistent with previous versions of the gym frame stack wrapper.
### Pitch
Just remove these lines, or maybe assert `stack_size > 0` instead:
https://github.com/Farama-Foundation/Gymnasium/blob/main/gymnasium/wrappers/stateful_observation.py#L377-L380
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| 0easy
|
Title: Date Visualization Bug in Flights Dataset
Body: X axis for temporal attribute looks off for the flights dataset.

| 0easy
|
Title: Cleaning non-standard unicode characters
Body: There is a space in "in %" that does not change to an underscore ("in_%"), because it is a non-breaking Latin-1 (ISO 8859-1) space (\xa0).
```python
import pandas as pd
import janitor

table_GDP = pd.read_html('https://en.wikipedia.org/wiki/Economy_of_the_United_States', match='Nominal GDP')
df = table_GDP[0]
df = df.clean_names(strip_underscores=True, case_type='lower')
df.columns
```
> Index(['year', 'nominal_gdp_in_bil_us_dollar', 'gdp_per_capita_in_us_dollar',
> 'gdp_growth_real', 'inflation_rate_in_percent',
> 'unemployment_in_percent', 'budget_balance_in %_of_gdp_[107]',
> 'government_debt_held_by_public_in %_of_gdp_[108]',
> 'current_account_balance_in %_of_gdp'],
> dtype='object')
We need to add a normalization step to correct this and similar issues in data coming from non-standard sources.
```python
from unicodedata import normalize

def clean_normalize_whitespace(x):
    if isinstance(x, str):
        return normalize('NFKC', x).strip()
    else:
        return x
```
Reference:
https://unicode.org/reports/tr15/#Norm_Forms | 0easy
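For example, running the proposed helper over one of the affected column labels folds the non-breaking space back into a regular one (a standalone sketch, not janitor's API):

```python
from unicodedata import normalize

def clean_normalize_whitespace(x):
    # NFKC compatibility normalization maps \xa0 (non-breaking space) to a regular space
    if isinstance(x, str):
        return normalize('NFKC', x).strip()
    return x

fixed = clean_normalize_whitespace('budget_balance_in\xa0%_of_gdp_[107]')
```

After this, `clean_names` would replace the now-regular space with an underscore as expected.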
|
Title: Missing image on Lesson 3 notebook
Body: Hey,
Image _credit_scoring_toy_tree_english.png_ is missing on the topic3_decision_trees_kNN notebook. | 0easy
|
Title: Centralize python setup guide
Body: Throughout the docs and guides we mention how to set up a Python project using `pyenv`, `python`, `venv`, and `pip`; it would be nice to centralize this and reference a common resource.
### The page URLs
* https://slack.dev/bolt-python/
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: Enhancement: Support custom response for batch actions
Body: Enhance the `@action` decorator to support custom responses, such as redirects or rendering HTML pages, instead of only returning strings (related to #205)
Todo:
- [x] Add a custom_response flag to the @action decorator function (default: False).
- [x] Update submitAction() in [list.js](https://github.com/jowilf/starlette-admin/blob/main/starlette_admin/statics/js/list.js) to handle custom responses.
- [x] Modify handle_action in the base admin class to support custom responses.
- [x] Ensure backward compatibility.
- [x] Include an example in [custom_actions example](https://github.com/jowilf/starlette-admin/tree/main/examples/custom_actions)
- [x] Add tests.
- [x] Update the documentation | 0easy
|
Title: [ENH]: Allow list of padding values for bar_label
Body: ### Problem
Sometimes when adding labels to bar plots the labels will overlap another item and need padding. In this example the label "300" (column D) is too low and overlaps other text. Ideally I could just provide padding as a list to `bar_label` so that I can target this specific label (instead of the current all-or-nothing float value):

With manually added labels I can add padding to just the desired label, but it would be nice as a built-in option:

### Proposed solution
Add logic to check if the `padding` parameter of `bar_label` is an appropriate-length iterable and apply padding accordingly. | 0easy
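A sketch of that check in plain Python (the helper name is hypothetical; matplotlib's actual implementation would live inside `bar_label`): broadcast a scalar, or validate a per-bar iterable.

```python
from collections.abc import Iterable

def resolve_padding(padding, n_bars):
    """Accept either a scalar padding or an n_bars-length iterable of
    per-bar paddings, as proposed for bar_label."""
    if isinstance(padding, Iterable) and not isinstance(padding, str):
        padding = list(padding)
        if len(padding) != n_bars:
            raise ValueError(f"padding must have length {n_bars}, got {len(padding)}")
        return padding
    # scalar: current all-or-nothing behaviour, broadcast to every bar
    return [float(padding)] * n_bars
```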
|
Title: Visualization breaks when intent is not part of resulting table
Body: When displaying the visualization, it is possible that the intent is not present in the resulting data table. For example in the case of the index, the intent is propagated over to the resulting table, but the attribute `Attrition` is no longer in the table. When the intent is not present in the dataframe, we should handle the error by falling back to the default visualizations.
```python
df = pd.read_csv("https://raw.githubusercontent.com/lux-org/lux-datasets/master/data/employee.csv")
df.intent = ["Attrition"]
df.groupby("BusinessTravel").mean()
```

| 0easy
|
Title: Typescript SDK: How to get workflow Id while creating via putWorkflow
Body: I'm creating a workflow via `admin.putWorkflow`, but it does not return anything.
In this case, how can I get the workflow Id so that I can delete it later?
I couldn't find anything in the SDK or docs.
SDK: Typescript
Thanks. | 0easy
|
Title: add a unit test that `datalab.issues` solely contains numeric and boolean values
Body: Can go in this file:
https://github.com/cleanlab/cleanlab/blob/master/tests/datalab/test_datalab.py
Also add a second unit test that `datalab.issue_summary` solely contains numeric values. | 0easy
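A sketch of the assertion such a test could make, assuming `datalab.issues` is a pandas DataFrame (the helper name is made up; the real test would live in `test_datalab.py`):

```python
import pandas as pd
from pandas.api.types import is_bool_dtype, is_numeric_dtype

def check_issues_dtypes(issues: pd.DataFrame) -> bool:
    """True iff every column is numeric or boolean, as the test should assert."""
    return all(
        is_numeric_dtype(issues[col]) or is_bool_dtype(issues[col])
        for col in issues.columns
    )
```

The second test for `datalab.issue_summary` would be the same check with `is_numeric_dtype` only.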
|
Title: Contribution: tree visualisation for lightgbm models
Body: Recently `dtreeviz` has added support for lightgbm models: https://github.com/parrt/dtreeviz/
So it would be cool to add the same tree visualization in explainerdashboard that already exists for RandomForest, ExtraTrees and XGBoost models.
If someone wants to pitch in to help extract individual predictions of decision trees inside lightgbm boosters and then get them in shape to be used by the dashboard that would be great! | 0easy
|
Title: Radio Group
Body: https://ant.design/components/radio/ | 0easy
|
Title: Support for python3.12
Body: Hello there, is it possible that `gensim` can support python3.12? | 0easy
|
Title: fix js when right bar does not appear
Body: there is some JS code that highlights the currently active section in the right sidebar. See this: https://docs.ploomber.io/en/latest/user-guide/cli.html
However, not all doc pages show the right sidebar; in that case, the console keeps throwing an error:

| 0easy
|
Title: Static Type Hinting: `dataprofiler/reports/utils.py`
Body: **Is your feature request related to a problem? Please describe.**
Improve the project in the area of
- Make collaboration and code contribution easier
- Getting started with the code when a contributor is new to the code base
**Describe the outcome you'd like:**
Utilizing types from [mypy](https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html) documentation for python3, implement type hinting in `dataprofiler/reports/utils.py`.
**Additional context:**
- [mypy](https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html) documentation
- [Why use python types?](https://medium.com/vacatronics/why-you-should-consider-python-type-hints-770e5cb1570f)
| 0easy
|
Title: Collaboration Platform Activity metric API
Body: The canonical definition is here: https://chaoss.community/?p=3484v | 0easy
|
Title: allow One hot encoder to work with nan
Body: At the moment the one hot encoder does not support missing values; it will raise an error. We want to make this transformer more versatile.
OneHotEncoder could have 3 options:
- [ ] missing_values="raise" ==> current functionality
- [ ] missing_values="encode" ==> treat na as an additional category
- [ ] missing_values="ignore" ==> ignore, but only if encoding in k-1, otherwise they will be merged with the dropped category
| 0easy
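A sketch of the three proposed modes on the category-building side (the helper name and the `"missing"` label are illustrative assumptions, not feature-engine's API):

```python
import math

def categories_with_missing(values, missing_values="raise"):
    """Build the category list for one column under the three proposed modes."""
    def is_nan(v):
        return isinstance(v, float) and math.isnan(v)

    cats = sorted({v for v in values if not is_nan(v)})
    if not any(is_nan(v) for v in values):
        return cats
    if missing_values == "raise":
        raise ValueError("Input contains NaN")   # current behaviour
    if missing_values == "encode":
        return cats + ["missing"]                # NaN becomes its own category
    if missing_values == "ignore":
        return cats                              # NaN rows get all-zero dummies
    raise ValueError(f"unknown option: {missing_values}")
```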
|
Title: tests isolation
Body: We need to make sure our tests are isolated, with a local-directory fixture provided by default for each test. Currently some tests produce residues that break test isolation and can affect other tests. We also have a fixture, `tmp_directory`, that we currently pass manually via each test's signature. | 0easy
|
Title: Update the pytorch interface to use the numpy interface
Body: With version 1.8, PyTorch now has a numpy-like interface. Specifically, `qr` and `svd` definitely need to be moved over.
| 0easy
|
Title: Add public argument conversion API for libraries and other tools
Body: Hi @pekkaklarck ,
We are using the `TypeConverter` class in Robot Framework Browser in multiple places to convert arguments of `*varargs` when calling internal keywords.
Unfortunately this API is no longer able to digest real `type` values.
```python
from robot.running.arguments.typeconverters import TypeConverter
TypeConverter.converter_for(int)
```
We need "dynamic" type conversion for many reasons.
By dynamic I mean we do Robot type conversion inside the Browser lib when Robot is not able to.
1. We have situations where we deprecate positional args in favour of named-only ones. In these cases we introduce a `*varargs` argument that collects all old positional args and assigns them in the old order to the variables inside. Therefore there is no type hinting in the arguments, but we have a catalog of the original types which can be used to convert.
2. We have keywords like `Promise To` and `Wait For Condition` which call our keywords internally and have `*args` to take the arguments. These two keywords iterate over the called keyword's interface and try to convert the arguments according to their types. In this case too we want/need to support dynamic argument conversion.
For us it is not really important whether exactly this function `converter_for()` stays compatible or a new function is implemented to get the proper converter for a Python type.
However, it would be more convenient for our users to stay compatible, so they do not necessarily need to update Browser when updating RF.
We do currently:
```python
converter = TypeConverter.converter_for(argument_type)
if converter is not None:
    params[argument_name] = converter.convert(
        argument_name, params.get(argument_name)
    )
```
A Public API for this would be nice too.
Or maybe even just a convenient function for Converting value to type as a static method.
Something like `TypeConverters.convert_to_type(name: str, value: Any, destination_type: type)`
Cheers | 0easy
|
Title: State filter doesn't work with inline keyboard query callback in groups
Body: ## Context
Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.
* Operating System: Ubuntu 20.10
* Python Version: 3.8.6
* aiogram version: 2.11.2
* aiohttp version: 3.7.3
* uvloop version (if installed):
## Expected Behavior
State filter works for inline keyboard queries in group chats.
## Current Behavior
Setting callback for message with state machine like this works: ``@dp.message_handler(state=Form.date)``
Setting callback for queries from inline keyboard doesn't work: ``@dp.callback_query_handler(state=Form.date)`` This handler is completely ignored.
This works: ``@dp.callback_query_handler(state='*')``
Said behavior is only observed in group chats (group privacy turned off for this bot). In a direct chat with the bot, the filter works as expected.
### Steps to Reproduce
Please provide detailed steps for reproducing the issue.
1. Register callback for inline keyboard query with state filter
2. Press inline keyboard button in group chat
3. Press is ignored
| 0easy
|
Title: feat: Add support for 5 min Solar and Wind forecast to the Energinet (DK.py) parser
Body: Energinet has 5-minute solar and wind forecasts available for DK-DK1 and DK-DK2.
In order to increase our data quality we would like to use these instead of the hourly forecasts we currently get from ENTSO-E.
These changes should be made to the `DK.py` parser, and then the config files for DK-DK1 and DK-DK2 at `DK-DK1.yaml` and `DK-DK2.yaml` should be modified to use the new parser.
Documentation for the forecasts can be found here: [https://www.energidataservice.dk/tso-electricity/Forecasts_5Min](https://www.energidataservice.dk/tso-electricity/Forecasts_5Min) and it should use the **Forecast Current** value.
For any solution to be approved and merged it needs to use the parser classes defined in the repository for built-in validation and error correction. There should also be a snapshot test added to prevent any future regressions of the code. | 0easy
|
Title: cached_path should handle "gs://" (Google Cloud Storage) URLs just like it does for "s3://" or "hf://"
Body: We could utilize https://github.com/googleapis/python-storage to make this robust.
| 0easy
|
Title: [Feature] Implement requests-like exception hierarchy
Body: **Is your feature request related to a problem? Please describe.**
Only `CurlError` and `RequestsError` will be raised if anything happens.
**Describe the solution you'd like**
https://requests.readthedocs.io/en/latest/api/#exceptions
| 0easy
|
Title: [Feature request] Use protobuf's generator to generate pyi stubs
Body: ### Describe the feature
It looks like protobuf now offers a native way for generating pyi files: https://github.com/protocolbuffers/protobuf/blob/main/python/README.md#implementation-backends. Currently in ONNX we use a custom script. The official tool has the potential to be more correct and robust. | 0easy
|
Title: [DOCS] ProbWeightRegressor needs documentation
Body: We have the api [here](https://scikit-lego.readthedocs.io/en/latest/api/API%20Reference.html#sklego.linear_model.ProbWeightRegression) but it would be nice to add an example of it.
| 0easy
|
Title: Prefocus field on some modals
Body: ## Feature Request
- share modal: focus on search input
| 0easy
|
Title: Array.to_list fails when length is a numpy integer in 2.5.2
Body: ### Version of Awkward Array
2.5.2
### Description and code to reproduce
```python
import awkward as ak, numpy as np
awk = ak.Array(np.ones((7, 0)))
form, length, container = ak.to_buffers(ak.to_packed(awk))
awk_from_buf = ak.from_buffers(form, np.int64(length), container)
awk_from_buf.to_list()
```
```pytb
TypeError Traceback (most recent call last)
Cell In[1], line 7
4 form, length, container = ak.to_buffers(ak.to_packed(awk))
5 awk_from_buf = ak.from_buffers(form, np.int64(length), container)
----> 7 awk_from_buf.to_list()
File ~/miniforge3/envs/scanpy-dev/lib/python3.11/site-packages/awkward/highlevel.py:496, in Array.to_list(self)
492 def to_list(self):
493 """
494 Converts this Array into Python objects; same as #ak.to_list.
495 """
--> 496 return self._layout.to_list(self._behavior)
File ~/miniforge3/envs/scanpy-dev/lib/python3.11/site-packages/awkward/contents/content.py:1086, in Content.to_list(self, behavior)
1085 def to_list(self, behavior: dict | None = None) -> list:
-> 1086 return self.to_packed()._to_list(behavior, None)
File ~/miniforge3/envs/scanpy-dev/lib/python3.11/site-packages/awkward/contents/regulararray.py:1464, in RegularArray.to_packed(self, recursive)
1462 index_nplike = self._backend.index_nplike
1463 length = self._length * self._size
-> 1464 content = self._content[: index_nplike.shape_item_as_index(length)]
1466 return RegularArray(
1467 content.to_packed(True) if recursive else content,
1468 self._size,
1469 self._length,
1470 parameters=self._parameters,
1471 )
File ~/miniforge3/envs/scanpy-dev/lib/python3.11/site-packages/awkward/_nplikes/array_module.py:323, in ArrayModuleNumpyLike.shape_item_as_index(self, x1)
321 return x1
322 else:
--> 323 raise TypeError(f"expected None or int type, received {x1}")
TypeError: expected None or int type, received 0
```
Notably, this doesn't throw an error in 2.5.1.
The issue may be a bit more specific than the title, but we're running into this in the anndata test suite with some of our round-tripping IO tests.
In our use case we are serializing to hdf5. We store the length directly in hdf5, so h5py reads it back as a numpy integer. This then causes errors after reconstruction, since awkward doesn't recognize the numpy integer length as the correct type. | 0easy
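Until this is fixed, a caller-side workaround is to coerce the length to a plain Python `int` before calling `ak.from_buffers` (the helper name is made up for illustration):

```python
import numpy as np

def as_python_int(length):
    """Coerce a numpy integer (e.g. a length read back from hdf5 via h5py)
    to a plain Python int that awkward's from_buffers accepts."""
    return int(length)

# e.g. ak.from_buffers(form, as_python_int(length), container)
```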
|
Title: Implementing `page_size` option in list page
Body: In the list page we have a select field like `Show [10, 25, 50, 100]`.
We need to change the list pagination size when switching that field.
This should send a query like `/admin/users/list?page_size=X` | 0easy
|
Title: Fix == for Document
Body: # Context
The operator `==` is not working properly for `BaseDocument`, even though it is implemented at the Pydantic `BaseModel` level.
Example:
```python
class MyDoc(BaseDocument):
    title: str
    tensor: NdArray

a = MyDoc(title='hello', tensor=np.zeros(5), id=1)
b = MyDoc(title='hello', tensor=np.zeros(5), id=1)
assert a == b
```
```bash
assert a == b
File "pydantic/main.py", line 909, in pydantic.main.BaseModel.__eq__
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
Even without Tensor
```python
class MyDoc(BaseDocument):
title: str
a = MyDoc(title='hello')
b = MyDoc(title='hello')
assert a == b
```
```bash
>>> assert a == b
File "pydantic/main.py", line 909, in pydantic.main.BaseModel.__eq__
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
This only works if the ids match as well:
```python
from docarray import BaseDocument
from docarray.typing import NdArray
import numpy as np

class MyDoc(BaseDocument):
    title: str

a = MyDoc(title='hello', id=1)
b = MyDoc(title='hello', id=1)
assert a == b
```
## DocumentArray
We should support the same in DocumentArray:
`da1 == da2` should work the same way Python does it for lists; this is one of the reasons `id` should not be checked.
`da1 == [MyDoc() for _ in range(len(da1))]` should also work
# Solution
- [ ] We should not look at the value of `id` when doing `==`
- [ ] we should call `(tensor == tensor).all()` for the tensor field
- [ ] implement == at da level as well
| 0easy
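A minimal standalone sketch of the proposed semantics (not docarray's actual classes): ignore `id`, and compare array fields with `.all()` instead of truthiness.

```python
import numpy as np

class DocSketch:
    """Toy document: `id` excluded from equality, tensors compared elementwise."""
    def __init__(self, **fields):
        self.id = fields.pop("id", None)  # id does not participate in ==
        self.fields = fields

    def __eq__(self, other):
        if self.fields.keys() != other.fields.keys():
            return False
        for name, value in self.fields.items():
            other_value = other.fields[name]
            if isinstance(value, np.ndarray):
                # avoid the ambiguous-truth-value error from Pydantic's __eq__
                if value.shape != other_value.shape or not (value == other_value).all():
                    return False
            elif value != other_value:
                return False
        return True
```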
|
Title: Add python 3.13 support
Body: I tried to install the development version on my Python 3.13 but I get this error:
```
$ sudo pip install -U git+https://github.com/twopirllc/pandas-ta.git@development
Collecting git+https://github.com/twopirllc/pandas-ta.git@development
Cloning https://github.com/twopirllc/pandas-ta.git (to revision development) to /tmp/pip-req-build-fmvbxe1l
Running command git clone --filter=blob:none --quiet https://github.com/twopirllc/pandas-ta.git /tmp/pip-req-build-fmvbxe1l
Running command git checkout -b development --track origin/development
Switched to a new branch 'development'
branch 'development' set up to track 'origin/development'.
Resolved https://github.com/twopirllc/pandas-ta.git to commit 438f3a97b71b26bc17efb1a204fb0e0fdc48e0aa
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting numba>=0.59.0 (from pandas_ta==0.4.19b0)
Downloading numba-0.60.0.tar.gz (2.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.7/2.7 MB 884.2 kB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [24 lines of output]
Traceback (most recent call last):
File "/usr/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
~~~~^^
File "/usr/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-_8llvsio/overlay/lib/python3.13/site-packages/setuptools/build_meta.py", line 334, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-_8llvsio/overlay/lib/python3.13/site-packages/setuptools/build_meta.py", line 304, in _get_build_requires
self.run_setup()
~~~~~~~~~~~~~~^^
File "/tmp/pip-build-env-_8llvsio/overlay/lib/python3.13/site-packages/setuptools/build_meta.py", line 522, in run_setup
super().run_setup(setup_script=setup_script)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-_8llvsio/overlay/lib/python3.13/site-packages/setuptools/build_meta.py", line 320, in run_setup
exec(code, locals())
~~~~^^^^^^^^^^^^^^^^
File "<string>", line 51, in <module>
File "<string>", line 48, in _guard_py_ver
RuntimeError: Cannot install on Python version 3.13.0; only versions >=3.9,<3.13 are supported.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
Please add support for this version of python.
Thanks | 0easy
|
Title: Status of library keywords that are executed in dry-run is `NOT RUN`
Body: Library keywords aren't in general executed in dry-run, but there are exceptions like `Import Library` and `Set Library Search Order`, and also `Run Keyword` and its variants like `Run Keyword If`, which execute their child keywords. In all these cases the status of the actually executed library keyword is set to `NOT RUN`. It is changed to `FAIL` if the keyword fails, but if it passes, it's impossible to see that the keyword actually was executed. | 0easy
|
Title: clarify in the docs that keys in the product field can be anything
Body: A user asked me if their `pipeline.yaml` looked right, and I noticed they had something like this:
```yaml
tasks:
# ... a bunch of tasks
- source: fit.py
product:
nb: path/to/report.html
data: path/to/data.csv
data1: path/to/data.csv
data2: path/to/data.csv
```
The user was confused and thought that the keys had to follow a fixed naming pattern. So we should add a little note in the basic concepts guide to explicitly say that keys can have any name (e.g., data, data_clean, model, etc.)
https://docs.ploomber.io/en/latest/get-started/basic-concepts.html
| 0easy
|
Title: dcc.Input bug with decimal values
Body: When using the Input component with numbers and with a callback:
```python
import dash
from dash.dependencies import Input, Output
import dash_core_components as dcc
import dash_html_components as html

app = dash.Dash()
app.layout = html.Div([
    dcc.Input(placeholder='Enter a value...',
              type='number',
              value=0,
              step=0.01,
              id='input_id',
              style={'float': 'left'}),
    html.Div(id='my-div')
])

@app.callback(
    Output(component_id='my-div', component_property='children'),
    [Input(component_id='input_id', component_property='value')]
)
def update_output_div(input_value):
    return 'You\'ve entered "{}"'.format(input_value)

if __name__ == '__main__':
    app.run_server()
```
Try entering decimal values with some 0s in the decimal part, for instance 0.001 or 4.1004: when the callback is triggered, any trailing 0 is removed, preventing you from finishing typing the number. | 0easy
|
Title: [BUG]How to set the ip address in the cluster mode
Body: Setting:
This is my first time using Mars and I face some problems that are maybe pretty easy for you.
I am using Mars as a distributed computation engine. When I set up cluster mode as the doc says, I am confused about the IP addresses here:
```
mars-worker -H <host_name> -p <worker_port> -s <supervisor_ip>:<supervisor_port>
mars.new_session('http://<web_ip>:<web_port>')
```
(from https://mars-project.readthedocs.io/zh_CN/latest/installation/deploy.html)
Question:
1. Does the IP mean an external network IP address or an internal network one?
2. I tried the external network IP address and it failed.
3. When I use the internal network IP address (I am using VSCode on my Mac and remote-developing on a Linux system), how do I open the Mars WebUI in a browser on my Mac with the Linux internal network IP address?
| 0easy
|
Title: Update Japanese docs to apply token rotation (#413) changes
Body: Update the Japanese docs with the same token rotation information that was introduced in #413 .
### The page URLs
* https://slack.dev/bolt-python/ja-jp/concepts#token-rotation
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: cuda requirement
Body: Is it possible to run this on a (recent) Mac, which does not support CUDA? I would have guessed that setting `--GPU 0` would not attempt to call CUDA, but it fails.
```
File "/Users/../Desktop/bopbtl/venv/lib/python3.7/site-packages/torch/cuda/__init__.py", line 61, in _check_driver
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
``` | 0easy
|
Title: Datapane Report templates
Body: **Is your feature request related to a problem? Please describe.**
- Users should have a sample report code available when using the CLI
**Describe the solution you'd like**
- A new command, `datapane report init` that generates a sample report, that upon running generates a simple report for local / publishing usage
**Describe alternatives you've considered**
- We could tie this into GH template repos to pull down templates from hosted repos that can be updated / improved by the wider community | 0easy
|
Title: [Feature request] Add apply_to_images to RandomGravel
Body: | 0easy
|
Title: Improvements for FeatureImportances visualizer
Body: There are a couple of enhancements for the `yellowbrick.features.FeatureImportances` visualizer that should be made to really make it stick out. They are as follows:
*Note to contributors: items in the below checklist don't need to be completed in a single PR; if you see one that catches your eye, feel to pick it off the list!*
- [ ] color negative coefs
- [ ] top n features to filter number displayed (both pos and neg)
- [ ] implement standard deviation for ensemble models
### Color negative coefs
The first item is relatively straightforward: currently the bar chart is a single color, but it might be nice to show negative `coef_` values in a different color, e.g. blue for positive, green for negative, as below:

To do this, you'll have to create a color array to pass as the color argument, e.g.
```python
colors = np.array(['b' if v > 0 else 'g' for v in self.feature_importances_])
self.ax.barh(pos, self.feature_importances_, color=colors, align='center')
```
We should also create arguments to provide a way to specify the colors.
### Top N Features
For the second item, I'm picturing something similar to [most informative features with scikit-learn](https://stackoverflow.com/questions/11116697/how-to-get-most-informative-features-for-scikit-learn-classifiers) (though not exactly this code). Here, an argument `topn`, which defaults to None, specifies a filter to only plot the `N` best features.
This should also be relatively straightforward, but gets complicated in the case of negative values. We have two options: we can rank all values, including negative ones, and plot the N best values whether positive or negative, or we can plot the N best positive and the N best negative coefficients.
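A minimal pure-Python sketch of the first option (the function and variable names are only illustrative): keep the N coefficients with the largest absolute value, so positive and negative coefs compete on equal footing:

```python
def top_n_features(features, importances, topn):
    """Return the topn (feature, importance) pairs ranked by absolute value."""
    pairs = sorted(zip(features, importances), key=lambda p: abs(p[1]), reverse=True)
    return pairs[:topn]

# Mixed positive/negative coefficients
features = ["age", "income", "debt", "tenure"]
importances = [0.9, -1.5, 0.2, -0.1]
print(top_n_features(features, importances, 2))  # [('income', -1.5), ('age', 0.9)]
```

The other option, N best positive plus N best negative, would filter by sign first and rank each group separately.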
### Standard Deviation for Ensembles
Ensemble models like Random Forest and Gradient Boosting have an underlying `estimators_` attribute, where each estimator reports its own feature importances. The global feature importances are the mean, but it would be nice to add an `xerr` bar with the standard deviation as in [plot forest importances](http://scikit-learn.org/stable/auto_examples/ensemble/plot_forest_importances.html).
This could also be useful for `CV` models that also have an underlying `estimators_` attribute.
The idea with this one is to compute the standard deviations for each feature using `estimators_` and np.std, and storing the value in a `confidences_` attribute during fit. Note that it will also have to be sorted using the `sort_index` -- the confidences are drawn during `ax.barh` with `xerr=self.confidences_`.
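The mean/std computation across estimators can be sketched with the standard library alone; `per_estimator_importances` below is a hypothetical stand-in for `[est.feature_importances_ for est in model.estimators_]`:

```python
import statistics

# Hypothetical importances from three estimators, one row per estimator
per_estimator_importances = [
    [0.2, 0.5, 0.3],
    [0.1, 0.6, 0.3],
    [0.3, 0.4, 0.3],
]

# Transpose so each inner tuple holds one feature's importances across estimators
per_feature = list(zip(*per_estimator_importances))
means = [statistics.mean(vals) for vals in per_feature]   # global importances
stds = [statistics.pstdev(vals) for vals in per_feature]  # np.std also defaults to population std
```

In the visualizer, the means would land in `feature_importances_` and the deviations in the new `confidences_` attribute, both sorted with the same `sort_index`.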
Right now it looks like the example above is no longer working exactly as expected, so some deeper review is necessary.
See also #194 where a discussion about tree-specific feature importances is ongoing. | 0easy
|
Title: Programming Language Distribution metric API
Body: The canonical definition is here: https://chaoss.community/?p=3430 | 0easy
|
Title: Licenses Declared metric API
Body: The canonical definition is here: https://chaoss.community/?p=3963 | 0easy
|
Title: Change Requests Accepted metric API
Body: The canonical definition is here: https://chaoss.community/?p=3589 | 0easy
|
Title: remove phantomjs support
Body: With headless Chrome and Firefox, PhantomJS is not needed anymore. Selenium has removed PhantomJS support too. | 0easy
|
Title: Add segmentation + classification head fine-tuning example
Body: It would be great to have more examples for fine-tuning in the library!
Similar to the current examples for [binary segmentation](https://github.com/qubvel-org/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb) and [multi-label segmentation](https://github.com/qubvel-org/segmentation_models.pytorch/blob/main/examples/cars%20segmentation%20(camvid).ipynb), it would be great to have an example for a segmentation + classification head. The same `camvid` or `oxford pet` dataset can be used. As a classification target for these datasets, the "existence" of the class in the image can be used.
The example should showcase how to
- fine-tune a model with pytorch-lightning (or any other fine-tuning framework, even a plain pytorch)
- use loss function for segmentation and classification
- compute metrics for classification and segmentation heads
- visualize results
[Docs on how to create a model with classification head.](https://smp.readthedocs.io/en/latest/insights.html#aux-classification-output)
In case anyone wants to work on this you are welcome! Just notify in this issue, and then ping me to review a PR when it's ready. | 0easy
|
Title: during manual local testing, the processes are not killed if the test fails
Body: We need to terminate the processes if the test fails for whatever reason:
## Current
```python
def test_e2e_default_batching(killall):
process = subprocess.Popen(
["python", "tests/e2e/default_batching.py"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
stdin=subprocess.DEVNULL,
)
time.sleep(5)
resp = requests.post("http://127.0.0.1:8000/predict", json={"input": 4.0}, headers=None)
assert resp.status_code == 200, f"Expected response to be 200 but got {resp.status_code}"
assert resp.json() == {"output": 16.0}, "tests/simple_server.py didn't return expected output"
killall(process)
```
## Proposed
```py
def test_e2e_default_batching(killall):
process = subprocess.Popen(
["python", "tests/e2e/default_batching.py"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
stdin=subprocess.DEVNULL,
)
time.sleep(5)
try:
resp = requests.post("http://127.0.0.1:8000/predict", json={"input": 4.0}, headers=None)
assert resp.status_code == 200, f"Expected response to be 200 but got {resp.status_code}"
assert resp.json() == {"output": 16.0}, "tests/simple_server.py didn't return expected output"
except Exception as e:
raise e
finally: # kill the process before raising the exception
killall(process)
```
---
_Originally posted by @bhimrazy in https://github.com/Lightning-AI/LitServe/issues/119#issuecomment-2141734005_
| 0easy
|
Title: [ENH] Allow substring and regex matching in find_replace
Body: # Brief Description
Currently `find_replace` only allows exact matching. I have run into use cases for fuzzy/substring match-and-replace both in my work and in the process of making a pyjanitor example. I was able to implement it by combining the pyjanitor function `update_where` and pandas-flavor, but we could potentially make this a pyjanitor feature--either as part of `find_replace` (for example, by adding a kwarg flag `exact=True/False`), or as a separate function `fuzzy_find_replace`.
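The matching logic itself can be sketched without pandas (the names below are hypothetical; in pyjanitor it would run over a DataFrame column, e.g. via `update_where`):

```python
import re

def fuzzy_replace(value, mapper, regex=True):
    """Return the first mapper value whose pattern matches, else the original value."""
    for pattern, replacement in mapper.items():
        if regex:
            if re.search(pattern, value, flags=re.IGNORECASE):
                return replacement
        elif pattern.lower() in value.lower():
            return replacement
    return value

mapper = {"box office": "Box Office", "dvd|blu|vhs": "Home Video/Entertainment"}
print(fuzzy_replace("Japan Box Office", mapper))  # Box Office
print(fuzzy_replace("DVD sales", mapper))         # Home Video/Entertainment
print(fuzzy_replace("Merchandise", mapper))       # Merchandise (no pattern matched)
```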
# Example API
For example, in [Tidy Up Web-Scraped Media Franchise Data](https://pyjanitor.readthedocs.io/notebooks/medium_franchise.html), when trying to implement a Python equivalent of the following R snippet:
```R
clean_df <- clean_category %>%
mutate(revenue_category = case_when(
str_detect(str_to_lower(revenue_category), "box office") ~ "Box Office",
str_detect(str_to_lower(revenue_category), "dvd|blu|vhs|home video|video rentals|video sales|streaming|home entertainment") ~ "Home Video/Entertainment",
str_detect(str_to_lower(revenue_category), "video game|computer game|mobile game|console|game|pachinko|pet|card") ~ "Video Games/Games",
str_detect(str_to_lower(revenue_category), "comic|manga") ~ "Comic or Manga",
str_detect(str_to_lower(revenue_category), "music|soundtrack") ~ "Music",
str_detect(str_to_lower(revenue_category), "tv") ~ "TV",
str_detect(str_to_lower(revenue_category), "merchandise|licens|mall|stage|retail") ~ "Merchandise, Licensing & Retail",
TRUE ~ revenue_category))
```
A mapper dictionary could be set up as:
```python
mapper = {
'box office': 'Box Office',
'dvd|blu|vhs|home video|video rentals|video sales|streaming|home entertainment': 'Home Video/Entertainment',
'video game|computer game|mobile game|console|game|pachinko|pet|card': 'Video Games/Games',
'comic|manga': 'Comic or Manga',
'music|soundtrack': 'Music',
'tv': 'TV',
'merchandise|licens|mall|stage|retail': 'Merchandise, Licensing & Retail',
}
```
If we enable substring matching and regex, the find_replace call could look like this:
```python
df.find_replace(column_name, mapper, exact=False, regex=True)
```
If this feature extension would clutter `find_replace`, a separate function could be set up. | 0easy
|
Title: Make `InsertionNoiseModel` serializable
Body: **Is your feature request related to a use case or problem? Please describe.**
Currently, `InsertionNoiseModel` is not serializable, while `NoiseModel` is. It would be useful if `InsertionNoiseModel` was serializable as well.
Calling `cirq.to_json(noise_model)` where `noise_model` is a `InsertionNoiseModel` currently raises the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[...]
File /usr/lib/python3.10/json/encoder.py:179, in JSONEncoder.default(self, o)
160 def default(self, o):
161 """Implement this method in a subclass such that it returns
162 a serializable object for ``o``, or calls the base implementation
163 (to raise a ``TypeError``).
(...)
177
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
TypeError: Object of type InsertionNoiseModel is not JSON serializable
```
**Describe the solution you'd like**
Being able to call `cirq.to_json(noise_model)` where `noise_model` is a `InsertionNoiseModel`.
**What is the urgency from your perspective for this issue? Is it blocking important work?**
P1 - Not urgent but would be nice to be able to serialize these noise models
| 0easy
|
Title: merge_schemas() does not add the "@stitch" directive definition to the merged schema AST
Body: When merging two or more schemas with `merge_schemas()` (in `schema_transformation.merge_schemas.py`), the generated `MergedSchemaDescriptor` object contains a `schema_ast` field with an AST for the merged schema. This AST contains uses of the `@stitch` directive, but no definition for that directive, and is therefore invalid. | 0easy
|
Title: RoBERTa on SuperGLUE's Broadcoverage Diagnostics task
Body: Broadcoverage Diagnostics is one of the tasks of the [SuperGLUE](https://super.gluebenchmark.com) benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the Broadcoverage Diagnostics data and fine-tunes a model on it. We expect scores in the range of their entry on the [SuperGLUE leaderboard](https://super.gluebenchmark.com/leaderboard).
This can be formulated as a classification task, using the [`TransformerClassificationTT`](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/models/transformer_classification_tt.py) model, analogous to the IMDB model. You can start with the [experiment config](https://github.com/allenai/allennlp-models/blob/Imdb/training_config/tango/imdb.jsonnet) and [dataset reading step](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/tango/imdb.py#L13) from IMDB, and adapt them to your needs. | 0easy
|
Title: [FEA] modularly-weighted layouts
Body: **Is your feature request related to a problem? Please describe.**
Often we have modular structure annotated on a graph, like community scores, and we'd like the layout to reflect it more than a basic force-directed layout does; see the image below.
**Describe the solution you'd like**
```python
# auto modes
g.layout_modular().plot()
g.layout_modular(algorithm='leiden').plot()
# controls
g.compute_cugraph('leiden').layout_modular(
node_attr='leiden',
algorithm=None,
engine='gpu',
intra_edge_weight=3.0
).plot()
```
This would translate to something like:
```python
def layout_modular(
g: Plottable,
node_attr: Optional[str] =None,
algorithm: Optional[str]='leiden',
engine: Literal['cpu', 'gpu', 'any'] = 'any',
intra_edge_weight=2.0,
inter_edge_weight=0.5,
edge_influence=2.0
) -> Plottable:
if node_attr is None:
# compute community via igraph or cugraph algorithm depending on env/engine/etc
node_attr = algorithm
...
e_annotated = annotate_src_dst_with_node_attr(g._edges, node_attr, 'src_attr', 'dst_attr')
e_weighted = g._edges.assign(weight=(e_annotated['src_attr'] == e_annotated['dst_attr']).map({
True: intra_edge_weight,
False: inter_edge_weight
}))
g2 = g.edges(e_weighted).settings(url_params={'edgeInfluence': edge_influence})
return g2
```
It should work in both pandas + cudf modes
**Additional context**
* binding edge weights in pygraphistry:
https://github.com/graphistry/pygraphistry/blob/master/demos/more_examples/graphistry_features/edge-weights.ipynb
* expected output:

| 0easy
|
Title: [New feature] Add apply_to_images to CLAHE
Body: | 0easy
|
Title: Mermaid improvements
Body: Currently, mermaid is not bundled in solara-assets, which causes 404 errors in airgapped or firewalled environments.
Also, we should try to match what nbconvert is doing so experiences with nbconvert, voila and solara are similar: https://github.com/jupyter/nbconvert/pull/1957 | 0easy
|
Title: [LineZone] - flip in/out line crossing directions
Body: ## Description
Between the `supervision-0.17.0` and `supervision-0.18.0` releases, the in/out line crossing directions were accidentally flipped. Given that `LineZone` is one of the oldest features we have, we do not want to make life difficult for users and want to restore the previous behavior. The change made in this [PR](https://github.com/roboflow/supervision/pull/735), most likely in this [line](https://github.com/roboflow/supervision/blob/0ccb0b85adee4202f5fe96834a374a057bbbd9da/supervision/detection/line_counter.py#L140), is responsible for the change in behavior.
https://github.com/roboflow/supervision/blob/0ccb0b85adee4202f5fe96834a374a057bbbd9da/supervision/detection/line_counter.py#L140
### Minimal Reproducible Example
You can easily confirm the crossing direction change between `supervision-0.17.0` and `supervision-0.18.0` releases using this [notebook](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8-and-supervison.ipynb). Here are example results.
__supervision-0.17.0__
https://github.com/roboflow/supervision/assets/26109316/32e0f95c-9204-4703-ab25-c2255a597720
__supervision-0.18.0__
https://github.com/roboflow/supervision/assets/26109316/af6db77e-24f8-4338-9925-3c80afe178f8
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! | 0easy
|
Title: Demos: Move demo gifs to svg based demos
Body: ## Description
Currently, we make use of gif demos, which is cool, but not as cool as `svg-term-cli`. Need to explore and move demos to svg based terminal sessions to improve documentation. | 0easy
|
Title: Implement proper destroying device context in .draw_outline() method
Body: This issue was raised in #552. The idea is pretty obvious: properly destroying the device context returned by the `CreateDC` function. | 0easy
|
Title: Crash if listener executes library keyword in `end_test` in the dry-run mode
Body: The 'NOT RUN' status is not part of the highlighter, causing a `KeyError` if the test case status is logged to the console.
Missing key here:
https://github.com/robotframework/robotframework/blob/445ea971e43bb974392fe0de8bbcf8be88a6eb08/src/robot/output/console/highlighting.py#L101
Could, for example, be called here:
https://github.com/robotframework/robotframework/blob/445ea971e43bb974392fe0de8bbcf8be88a6eb08/src/robot/output/console/verbose.py#L52
https://github.com/robotframework/robotframework/blob/445ea971e43bb974392fe0de8bbcf8be88a6eb08/src/robot/output/console/verbose.py#L118 | 0easy
|
Title: [DOCS]: Unclear what to do with unneeded options in YAML files
Body: ### Affected documentation section
2. work_preferences.yaml; 3. plain_text_resume.yaml
### Documentation improvement description
I am unclear as to what I should be doing with unneeded options in the two YAML Files: _"work_preferences.yaml"_ and _"plain_text_resume.yaml"._
For example, `plain_text_resume.yaml`, in the `personal_information` section :
```
personal_information:
name: "[Your Name]"
surname: "[Your Surname]"
date_of_birth: "[Your Date of Birth]"
country: "[Your Country]"
city: "[Your City]"
address: "[Your Address]"
zip_code: "[Your zip code]"
phone_prefix: "[Your Phone Prefix]"
phone: "[Your Phone Number]"
email: "[Your Email Address]"
github: "[Your GitHub Profile URL]" # Un-needed (Github profile, in my case, is not required, and would be odd to include)
linkedin: "[Your LinkedIn Profile URL]"
```
What do I do to omit the inclusion of the GitHub Profile URL field/option?
Can I just comment out the ones I don't need?
Can they just be removed, without affecting the main program logic?
Can I mark them with something like n/a, and they AI will figure out what to do?
I may have missed something, but this is unclear to me!
### Why is this change necessary?
If this information isn't already documented (and I haven't missed it), adding it will make the project more accessible to a larger audience
### Additional context
_No response_ | 0easy
|
Title: Row-wise weighted mean gives incorrect results
Body: ### Version of Awkward Array
2.6.9
### Description and code to reproduce
I am doing a row-wise weighted mean in an awkward array and getting wrong results.
Here's an MRE with outputs in the comments:
```python
import awkward as ak
data = ak.Array(
[
[1, 2, 3],
[4, 5],
]
)
weight = ak.Array(
[
[1, 1, 2],
[1, 10],
]
)
# manual row-by-row - expected results
print(ak.mean(data[0], weight=weight[0])) # -> 2.25
print(ak.mean(data[1], weight=weight[1])) # -> 4.909090909090909
# manual vectorized - expected results
weights_norm = weight / ak.sum(weight, axis=1)
print(ak.sum(weights_norm * data, axis=1)) # -> [2.25, 4.91]
# the most natural call I expected to work - incorrect result in the 2nd row
print(ak.mean(data, weight=weight, axis=1)) # -> [2.25, 13.5]
``` | 0easy
|
Title: add json-array to save binaries
Body: # Context
DocList.save_binary does not support the json protocol. It should, IMO. | 0easy
|
Title: Option to add a custom loader or disable completely
Body: | 0easy
|
Title: Date and Time Picker Component
Body: Hi,
It would be nice to have a `Date and Time Picker` if it doesn't exist yet. I couldn't find it in the API list.
Thank You. | 0easy
|
Title: Issue Response Time metric API
Body: The canonical definition is here: https://chaoss.community/?p=3631 | 0easy
|
Title: Docs don't render LaTeX formulas
Body: ### Documentation
Hi!
I was reading the documentation and found an equation display error. Here is the [link](https://lightning.ai/docs/pytorch/stable/notebooks/course_UvA-DL/03-initialization-and-optimization.html#Initialization) and a screenshot.
<img width="1391" alt="ζͺε±2024-03-15 14 42 57" src="https://github.com/Lightning-AI/pytorch-lightning/assets/72594017/80c3a8a0-7e61-4a86-8d09-b5dedc8cc56b">
cc @borda | 0easy
|
Title: Conversion to JSON failure
Body: **Description of the issue**
Any circuit with the inverse QubitPermutation gate added will fail if it is loaded from JSON.
**How to reproduce the issue**
```python
import cirq as c
qubits = c.LineQubit.range(3)
circuit = c.Circuit()
circuit.append(c.QubitPermutationGate(permutation=[0,1,2])(qubits[0], qubits[1], qubits[2])**-1)
print(circuit)
json_text = c.to_json(circuit)
print(json_text)
circuit = c.read_json(json_text=json_text)
print(circuit)
```
<details>
```bash
0: ───[0>0]──────
      │
1: ───[1>1]──────
      │
2: ───[2>2]^-1───
{
"cirq_type": "Circuit",
"moments": [
{
"cirq_type": "Moment",
"operations": [
{
"cirq_type": "GateOperation",
"gate": {
"cirq_type": "_InverseCompositeGate"
},
"qubits": [
{
"cirq_type": "LineQubit",
"x": 0
},
{
"cirq_type": "LineQubit",
"x": 1
},
{
"cirq_type": "LineQubit",
"x": 2
}
]
}
]
}
]
}
Traceback (most recent call last):
File "/Users/xxx/json_failure.py", line 11, in <module>
circuit = c.read_json(json_text=json_text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/cirqenv/lib/python3.11/site-packages/cirq/protocols/json_serialization.py", line 561, in read_json
return json.loads(json_text, object_hook=obj_hook)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/cirqenv/lib/python3.11/site-packages/cirq/protocols/json_serialization.py", line 352, in __call__
cls = factory_from_json(cirq_type, resolvers=self.resolvers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xxx/cirqenv/lib/python3.11/site-packages/cirq/protocols/json_serialization.py", line 431, in factory_from_json
raise ValueError(f"Could not resolve type '{type_str}' during deserialization")
ValueError: Could not resolve type '_InverseCompositeGate' during deserialization
```
</details>
**Cirq version**
You can get the cirq version by printing `cirq.__version__`. From the command line:
```
Cirq: 1.4.1
```
| 0easy
|
Title: Should not try to get_node_path() if SSR mode is disabled.
Body: ### Describe the bug
In Gradio code, the lines https://github.com/gradio-app/gradio/blob/54fd90703e74bd793668dda62fd87c4ef2cfff03/gradio/blocks.py#L2560 and https://github.com/gradio-app/gradio/blob/54fd90703e74bd793668dda62fd87c4ef2cfff03/gradio/routes.py#L1737 call `get_node_path()` prematurely. The call to `get_node_path()` should happen only if SSR mode is set to true.
This is because the call prevents the application from launching if `get_node_path()` fails. The `get_node_path()` call fails because launching a subprocess (calling `which` to check the path of `node`) is not allowed. Note that SSR mode is set to false and is not required. This is a very niche use case, but it can happen, for instance, if the app is running inside a trusted platform module where forking new processes will fail.
I suggest a change along the following lines. I can submit a pull request if this is okay.
```
self.node_path = os.environ.get(
"GRADIO_NODE_PATH", "" if wasm_utils.IS_WASM else get_node_path()
)
```
be moved to within the following if block `if self.ssr_mode:`.
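A self-contained sketch of the guarded behavior (`get_node_path` below is a hypothetical stub for Gradio's helper, failing the way a restricted environment would):

```python
import os

def get_node_path():
    # Stub for Gradio's helper: the real one spawns a subprocess
    # (`which node`), which is exactly what fails in restricted environments.
    raise OSError("forking subprocesses is not allowed here")

def resolve_node_path(ssr_mode: bool) -> str:
    # Only try to locate node when SSR is actually enabled
    if not ssr_mode:
        return ""
    return os.environ.get("GRADIO_NODE_PATH") or get_node_path()

print(repr(resolve_node_path(ssr_mode=False)))  # '' and get_node_path is never called
```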
### Have you searched existing issues?
- [x] I have searched and found no existing issues
### Reproduction
It is less to do with the code that launches Gradio and more to do with the environment where it is launched.
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.20.0
gradio_client version: 1.7.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.5.0
gradio-client==1.7.2 is not installed.
groovy: 0.1.2
httpx: 0.28.1
huggingface-hub: 0.29.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.0.2
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.2
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.9
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.0
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.29.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 15.0
```
### Severity
Blocking usage of gradio | 0easy
|
Title: Add license declaration in setup.py
Body: Please add the `license="BSD"` classifier in setup.py. This way automated tools can extract the license type. | 0easy
|
Title: Marketplace - creator page - text is unreadable in dark mode, change color to white
Body: ### Describe your issue.
<img width="1630" alt="Screenshot 2024-12-17 at 19 19 08" src="https://github.com/user-attachments/assets/0fe97122-5a51-4350-ab39-dc7d8797f2ac" />
This text is unreadable because of the color when in dark mode. Please change to this color when in dark mode:
background: var(--zinc-50, #FAFAFA);
| 0easy
|
Title: [k8s] Raise socat/netcat dependency requirements in `sky check`
Body: Currently we ask users to install netcat/socat when they run `sky launch`. Worse, if neither of the dependencies is installed, our error messages do not tell the user that both need to be installed, making it a sequential process:
```
sky check kubernetes
# Passes
sky launch -c test --cloud k8s -- echo hi
# Asks me to install socat
# I run brew install socat and try again:
sky launch -c test --cloud k8s -- echo hi
# Now asks me to install netcat
# Works after netcat is installed
```
We should have one command to install both if neither is installed, and move it to `sky check` since it is a hard requirement. | 0easy
|
Title: TechEmpower benchmarks
Body: This'd be a good one for a contributor to jump on.
Adding TechEmpower benchmarks for Responder. I'd suggest copying the Starlette case, and adapting it accordingly: https://github.com/TechEmpower/FrameworkBenchmarks/tree/master/frameworks/Python/starlette
The PR would be against the TechEmpower repo, not here. They run continuous benchmarks against the suite, so the performance section could be updated once there's a run that includes Responder.
You'll want to use an async DB connector, as per the Starlette case. | 0easy
|
Title: Add py.typed
Body: We still don't ship `py.typed` for some reason, even though many of the public APIs are typed. | 0easy
|
Title: Improve error message when a meta-field is queried but not built-in or present in the schema
Body: | 0easy
|
Title: DOC: Add comparison with SPSS
Body: ### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/getting_started/comparison/index.html
### Documentation problem
I teach pandas, and have a number of students that come in with experience in [SPSS](https://en.wikipedia.org/wiki/SPSS). We have comparison guides with a number of other related tools, but none that speaks to SPSS.
### Suggested fix for documentation
Add a "Comparison to SPSS" page | 0easy
|
Title: `raise_for_status` now raises an exception for 1xx and 3xx responses
Body: Hi!
I've just noticed that `response.raise_for_status()` now raises an exception for 1xx and 3xx responses. I don't think that was the case before (namely with version 0.18), and the documentation doesn't seem to have been updated here: https://www.python-httpx.org/exceptions/
> The HTTPStatusError class is raised by response.raise_for_status() on 4xx and 5xx responses.
### Steps to reproduce (even though I think this is an intentional behavior, the doc is just not up to date?)
```python
import httpx
url_301 = "https://stat.pagesjaunes.fr/redirect?target=ETvxvbcLWawl69YzLe_Ph4vo03E_Yj6_Jt1_EvjjbHFaCE3c9RGLEg==&v=2.0&p1=1&p2=1&p3=&p4=RESEAUSOC-SITE-LVS-GRATUIT&p5=LR&p6=1361647273529289141447273529289&p7=06486910&p8="
httpx.get(url_301, follow_redirects=False).raise_for_status()
``` | 0easy
|
Title: [Feature] Add body to response.request
Body: **Is your feature request related to a problem? Please describe.**
Just a compatibility problem with the existing requests package in Python. If we POST a request then the response has a request object as an attribute and being a POST, that request object has a body attribute (being the posted data). I love this package, thanks enormously! This would help it slot in as a simple replacement for request, but may of course be one of many compatibility issues in the end. Just one I found when I swapped to this package in an existing script that POSTs and checks response.request.body.
**Describe the solution you'd like**
That the request recorded against a response contain all the request attributes ...
**Describe alternatives you've considered**
I just take it from the request I sent ;-) | 0easy
|
Title: time series part 1: weighted_average, weights order
Body: In the `weighted_average` function [here](https://mlcourse.ai/articles/topic9-part1-time-series/), larger weights need to be assigned to more recent observations; however, that's not the case in the implementation. | 0easy
|
Title: Move to newer urls function to avoid Django 4.0 deprecation warning
Body: When running the test suite there are a number of warnings of the form
```
demo/tests/test_dpd_demo.py::test_template_tag_use
/home/mark/local/django-plotly-dash/demo/demo/urls.py:38: RemovedInDjango40Warning:
```
These can be removed by modifying the code to use the newer functions.
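These warnings usually come from the old `django.conf.urls.url()` helper, which was deprecated in Django 3.1 and removed in 4.0. Assuming that is the source here, the migration in `urls.py` would look roughly like this (`views.demo` is a placeholder name):

```
# Before: deprecated since Django 3.1, removed in Django 4.0
from django.conf.urls import url

urlpatterns = [
    url(r'^demo/$', views.demo),
]

# After: re_path() is a drop-in replacement for regex routes,
# and path() is the simpler non-regex form
from django.urls import path, re_path

urlpatterns = [
    re_path(r'^demo/$', views.demo),
    # path('demo/', views.demo),  # equivalent without a regex
]
```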
| 0easy
|
Title: Add Flower Baseline: FedBABU
Body: ### Paper
FedBABU: Towards Enhanced Representation for Federated Image Classification
### Link
https://arxiv.org/abs/2106.06042
### Maybe give motivations about why the paper should be implemented as a baseline.
FedBABU (200+ citations) is quite well known in the pFL field and is considered one of the most common baselines used by many pFL papers.
The plan is to reproduce the FedBABU and FedAvg experiment results from Table 5 in the paper.

### Is there something else you want to add?
I've reviewed the PR and issue list and found no ongoing work on FedBABU. I've already begun the reproduction process.
### Implementation
#### To implement this baseline, it is recommended to do the following items in that order:
### For first time contributors
- [ ] Read the [`first contribution` doc](https://flower.ai/docs/first-time-contributors.html)
- [ ] Complete the Flower tutorial
- [ ] Read the Flower Baselines docs to get an overview:
- [ ] [How to use Flower Baselines](https://flower.ai/docs/baselines/how-to-use-baselines.html)
- [ ] [How to contribute a Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html)
### Prepare - understand the scope
- [X] Read the paper linked above
- [X] Decide which experiments you'd like to reproduce. The more the better!
- [X] Follow the steps outlined in [Add a new Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html#add-a-new-flower-baseline).
- [X] You can use as reference [other baselines](https://github.com/adap/flower/tree/main/baselines) that the community merged following those steps.
### Verify your implementation
- [X] Follow the steps indicated in the `EXTENDED_README.md` that was created in your baseline directory
- [X] Ensure your code reproduces the results for the experiments you chose
- [X] Ensure your `README.md` is ready to be run by someone that is no familiar with your code. Are all step-by-step instructions clear?
- [X] Ensure running the formatting and typing tests for your baseline runs without errors.
- [X] Clone your repo on a new directory, follow the guide on your own `README.md` and verify everything runs. | 0easy
|
Title: Add link to file location question from FAQ
Body: Issue #58 concerns the location of files containing dash code (i.e. for layout, callbacks, and the like). This discussion, or a summary of it, belongs in the FAQ
| 0easy
|
Title: Parsing errors in reStructuredText files have no source
Body: If there's a parsing error in a reST file, the reported error looks like this:
> Error in file 'None' on line 4: Non-existing setting 'Bad Setting'.
Luckily fixing the source is easy.
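A hypothetical sketch of the fix: include the actual source path in the message instead of `None`. The function and argument names here are illustrative, not Robot Framework's internals:

```python
# Illustrative only: format the parsing error with the real source path,
# falling back to a placeholder when it is genuinely unknown.
def format_parsing_error(path, lineno, message):
    source = path if path is not None else "<unknown>"
    return f"Error in file '{source}' on line {lineno}: {message}"

print(format_parsing_error("tests/example.rst", 4,
                           "Non-existing setting 'Bad Setting'."))
```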
Another problem is that the line number isn't the actual line in the source file but a line in the parsed code block. Fixing that would be a lot harder and I don't consider it worth the effort at least now. | 0easy
|
Title: Make console report pretty
Body: Make the [console report](https://github.com/scanapi/scanapi/blob/master/scanapi/templates/markdown.jinja) prettier and add the response time to it.
This is how it looks now:
```
ScanAPI Report: Console
=======================
GET http://demo.scanapi.dev/api/health/ - 200
GET http://demo.scanapi.dev/api/languages/ - 200
GET http://demo.scanapi.dev/api/devs/ - 200
GET http://demo.scanapi.dev/api/devs/?newOpportunities=True - 200
GET http://demo.scanapi.dev/api/devs/?newOpportunities=False - 200
POST http://demo.scanapi.dev/api/devs/ - 201
GET http://demo.scanapi.dev/api/devs/129e8cb2-d19c-51ad-9921-cea329bed7fa - 404
GET http://demo.scanapi.dev/api/devs/129e8cb2-d19c-41ad-9921-cea329bed7f0 - 200
DELETE http://demo.scanapi.dev/api/devs/129e8cb2-d19c-41ad-9921-cea329bed7f0 - 200
GET http://demo.scanapi.dev/api/devs/129e8cb2-d19c-41ad-9921-cea329bed7f0/languages - 200
```
Maybe add a different colour for each HTTP method?
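A rough sketch of colouring each line by HTTP method with ANSI escape codes and appending the response time. The colour mapping is an assumption, not ScanAPI's actual palette:

```python
# Hypothetical colour scheme per HTTP method (ANSI escape codes).
METHOD_COLOURS = {
    "GET": "\033[32m",     # green
    "POST": "\033[34m",    # blue
    "PUT": "\033[33m",     # yellow
    "DELETE": "\033[31m",  # red
}
RESET = "\033[0m"

def colourise(method, url, status, elapsed_ms):
    colour = METHOD_COLOURS.get(method, "")
    return f"{colour}{method}{RESET} {url} - {status} ({elapsed_ms:.0f} ms)"

print(colourise("GET", "http://demo.scanapi.dev/api/health/", 200, 35.2))
```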
| 0easy
|
Title: Add labels for repo2docker version and repo
Body: It would be handy to add labels for the repo2docker version, repo URL, and ref to images when we build them:
```docker
LABEL repo2docker.version="x.y.z"
LABEL repo2docker.repo="https://..."
LABEL repo2docker.ref="abc123"
```
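A rough sketch of how such lines could be emitted when the Dockerfile is generated. The helper name and signature are made up for illustration, not repo2docker's actual code:

```python
# Hypothetical helper producing the proposed LABEL lines for a build.
def label_lines(version: str, repo: str, ref: str) -> list:
    labels = {
        "repo2docker.version": version,
        "repo2docker.repo": repo,
        "repo2docker.ref": ref,
    }
    return [f'LABEL {key}="{value}"' for key, value in labels.items()]

for line in label_lines("x.y.z", "https://example.com/repo.git", "abc123"):
    print(line)
```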
This records in docker image metadata info about what was built, which could be useful in the future. | 0easy
|
Title: Run librephotos from URL subdirectory
Body: If librephotos is installed alongside other apps on the same server, or runs behind a reverse proxy, it would be good to be able to serve it from a subdirectory. Currently it always runs from `/`; it would be good to have the option to run it from `/librephotos`, for example.
I tried to use the librephotos docker images behind an apache SSL-terminating reverse proxy as https://example.com/librephotos, but API calls and almost all other URLs are broken even if I use ProxyHTMLURLMap in apache.
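A minimal sketch of what supporting a prefix could look like on a Django-based backend. `BASE_PATH` is an assumed environment variable for illustration, not an existing librephotos setting:

```python
# Hypothetical Django settings fragment: serve the app under a URL prefix
# taken from an assumed BASE_PATH environment variable.
import os

BASE_PATH = os.environ.get("BASE_PATH", "").rstrip("/")  # e.g. "/librephotos"

FORCE_SCRIPT_NAME = BASE_PATH or None   # prefix applied to generated URLs
STATIC_URL = f"{BASE_PATH}/static/"
MEDIA_URL = f"{BASE_PATH}/media/"

print(STATIC_URL)
```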
| 0easy
|
Title: Question regarding precision@n and roc(@n?)
Body: Hello,
first and foremost, thank you for building this wrapper it is of great use for me and many others.
I have question regarding the evaluation:
Most outlier detection evaluation settings work by setting the ranking number n equal to the number of outliers (aka contamination), and so did I in my experiments.
My thought concerning the ROC and AUC score was:
1. Don't we have to rank the outlier scores from highest to lowest and evaluate the ROC only on the top n scores, thus needing a ROC@n curve?
2. Why do people use ROC and AUC for outlier detection problems, which by nature are heavily skewed and unbalanced? Hitting a lot of true negatives is easy and guaranteed if the algorithm knows that there are only n outliers.
In my case the precision@n of my chosen algorithms is in the range 0.2-0.4 because it is a difficult dataset. However, the AUC score is quite high at the same time.
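This effect is easy to reproduce on synthetic data. The toy example below (fabricated scores, not real results) ranks only 2 of 5 outliers in the top 5, giving precision@5 = 0.4, while the AUC still comes out near 0.97 because nearly all of the 95 inliers score below every outlier:

```python
# Toy illustration of precision@n vs. AUC on a skewed outlier problem.
def precision_at_n(labels, scores, n):
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:n]
    return sum(labels[i] for i in top) / n

def auc(labels, scores):
    # AUC as the probability that a random outlier outscores a random inlier.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# 100 points, 5 outliers ranked 3rd, 5th, 6th, 7th and 8th by score.
scores = [100 - i for i in range(100)]
labels = [1 if i in {2, 4, 5, 6, 7} else 0 for i in range(100)]

print(precision_at_n(labels, scores, 5))  # 0.4: only 2 of the top 5 are outliers
print(round(auc(labels, scores), 3))      # 0.971: yet the AUC looks excellent
```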
I would appreciate any thoughts on this since I am fairly new to the topic and might not grasp the intuition of the ROC curve for this task.
Best regards
Hlam | 0easy
|
Title: [ENH] `polars` schema checks - address performance warnings
Body: The current schema checks for lazy `polars` based data types raise performance warnings, e.g.,
```
sktime/datatypes/tests/test_check.py::test_check_metadata_inference[Table-polars_lazy_table-fixture:1]
/home/runner/work/sktime/sktime/sktime/datatypes/_adapter/polars.py:234: PerformanceWarning: Determining the width of a LazyFrame requires resolving its schema, which is a potentially expensive operation. Use `LazyFrame.collect_schema().len()` to get the width without this warning.
metadata["n_features"] = obj.width - len(index_cols)
```
These should be addressed.
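A minimal sketch of the replacement the warning itself suggests, assuming a polars version where `LazyFrame.collect_schema()` is available. The import guard keeps the sketch runnable even without polars installed:

```python
# Sketch of the fix: resolve the schema once via collect_schema()
# instead of reading LazyFrame.width, which triggers the PerformanceWarning.
try:
    import polars as pl
except ImportError:  # keep the sketch runnable without polars installed
    pl = None

if pl is not None:
    lf = pl.LazyFrame({"a": [1, 2], "b": [3, 4], "idx": [0, 1]})
    index_cols = ["idx"]

    schema = lf.collect_schema()  # explicit, one-time schema resolution
    n_features = schema.len() - len(index_cols)
    print(n_features)  # 2
```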
The tests to execute to check whether these warnings persist are those in the `datatypes` module - these are automatically executed for a change in the impacted file, on remote CI. | 0easy
|
Title: AIP-84 Custom list assets endpoint with latest event
Body: ### Description
Right now, the functionality of the Assets List in the UI is limited. We should create a custom GET `/ui/assets` endpoint to make it more useful.
First, we should add `latest_asset_event` to the response object and allow the list of assets to be sorted by the latest asset event timestamp.
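A plain-Python illustration of the intended response shape — these dataclasses stand in for Airflow's actual models and are not its real endpoint code:

```python
# Illustrative only: attach latest_asset_event to each asset and sort by it.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AssetEvent:
    timestamp: datetime

@dataclass
class Asset:
    name: str
    events: list = field(default_factory=list)

def to_response(assets):
    rows = []
    for asset in assets:
        latest = max(asset.events, key=lambda e: e.timestamp, default=None)
        rows.append({"name": asset.name,
                     "latest_asset_event": latest.timestamp if latest else None})
    # Most recent event first; assets without any event sort last.
    rows.sort(key=lambda r: r["latest_asset_event"] or datetime.min, reverse=True)
    return rows

assets = [
    Asset("a", [AssetEvent(datetime(2025, 1, 1)), AssetEvent(datetime(2025, 3, 1))]),
    Asset("b", [AssetEvent(datetime(2025, 2, 1))]),
    Asset("c"),
]
print([r["name"] for r in to_response(assets)])  # ['a', 'b', 'c']
```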
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 0easy
|
Title: Lintian warning: icon-size-and-directory-name-mismatch autokey.png 92x91
Body: ## Classification
Bug or UI?
## Summary
Before I forget to report: Debian's Lintian tool reports the following warning:
> W: autokey-qt: icon-size-and-directory-name-mismatch usr/share/icons/hicolor/96x96/apps/autokey.png 92x91
N:
N: The icon has a size that differs from the size specified by the name
N: of the directory under which it was installed. The icon was probably
N: mistakenly installed into the wrong directory.
N:
N: Severity: normal, Certainty: certain
N:
N: Check: desktop/icons, Type: binary, udeb
N:
## Notes
I noticed that config/autokey.svg is 96×96, so perhaps regenerating its PNG counterpart, making sure it is 96×96 instead of 92×91, would do the trick? | 0easy
|
Title: Crypto vs fiat
Body: It would be really interesting to see all crypto-fiat pairs on a single chart. The axes would need to be converted into a single domain, like implied BTC market price and order value in USD, to make them all accessible on one plot, but the larger data set might give a clearer picture of the overall status of the market. | 0easy
|
Title: Show link to report in CLI
Body: Is it possible to insert a link to open the report at the end of the CLI message?

| 0easy
|