Title: [BUG] Links being followed on example tests
Body: ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
I see that links are being followed even when I specify `--hypothesis-phases=explicit`.
It is a minor issue, but even if I filter the API using `--include-name`, links are still followed.
I am guessing `--include-name` means "run only that operation", not "also follow its links"?!
Or maybe links should not be followed when `--hypothesis-phases=explicit` is set.
I understand them being followed for stateful testing; however, the problem with links being followed during stateless/example testing is, for example:
Assume `POST /trees` is linked to `DELETE /trees/id`, with the id from creation used for the delete.
When `--hypothesis-phases=explicit` is set, the state/id from the POST call is _not_ used for the DELETE;
instead, any explicit example ids under `DELETE /trees/id` are used for the linked test.
I hope my explanation is clear with this example.
### To Reproduce
1. Run this command `st run --include-name 'POST /trees' tree-openapi20.yaml --base-url=http://localhost:8083 --hypothesis-phases=explicit`
2. See links are followed
Please include a minimal API schema causing this issue:
`any schema with links and examples`
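For concreteness, a hedged sketch of such a schema as a Python dict (the report uses a Swagger 2.0 file; this is OpenAPI 3 form for brevity, and all names are made up):
```python
schema = {
    "openapi": "3.0.0",
    "info": {"title": "Trees", "version": "1.0"},
    "paths": {
        "/trees": {
            "post": {
                "operationId": "createTree",
                "responses": {
                    "201": {
                        "description": "Created",
                        # The link that gets followed even under explicit phases
                        "links": {
                            "DeleteTree": {
                                "operationId": "deleteTree",
                                "parameters": {"id": "$response.body#/id"},
                            }
                        },
                    }
                },
            }
        },
        "/trees/{id}": {
            "delete": {
                "operationId": "deleteTree",
                "parameters": [
                    {"name": "id", "in": "path", "required": True,
                     "schema": {"type": "integer"}, "example": 1}
                ],
                "responses": {"204": {"description": "Deleted"}},
            }
        },
    },
}
```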
### Expected behavior
_imho_:
1. Links should not be followed when `--hypothesis-phases=explicit` is specified.
2. `--include-name` should be honored, and links should not be followed if the linked operation is not in the inclusion.
### Environment
`platform Linux -- Python 3.9.18, schemathesis-3.33.1, hypothesis-6.108.2, hypothesis_jsonschema-0.23.1, jsonschema-4.23.0`
| 0easy
|
Title: 27 tests fail
Body: ### 🐛 Describe the bug
[log](https://freebsd.org/~yuri/py311-torch-geometric-2.6.0-test.log)
### Versions
torch-geometric-2.6.0
pytorch-2.4.0
Python-3.11
FreeBSD 14.1 | 0easy
|
Title: Add convenience methods on `ParameterSet`
Body: It could be useful in hooks, e.g. to check whether an `APIOperation` contains some header (we leave case-insensitivity aside for now).
1. Add a new `contains` method to the [ParameterSet](https://github.com/schemathesis/schemathesis/blob/master/src/schemathesis/parameters.py#L55) class, the method should accept `name` of type `str`
2. The implementation should reuse the `get` method and check whether its result is `None`
3. Inside `test/specs/openapi/parameters/test_simple_payloads.py` add a new function `test_parameter_set_get`
4. This test function could have a setup like this:
```python
import schemathesis

# ... other tests omitted for brevity

def test_parameter_set_get(make_openapi_3_schema):
    header = {"in": "header", "name": "id", "required": True, "schema": {}}
    raw_schema = make_openapi_3_schema(parameters=[header])
    schema = schemathesis.from_dict(raw_schema)
```
5. Use the new method from step 1 on `schema["/users"]["POST"].headers` inside this test and verify that it contains the header named `id` and does not contain a header named `unknown`.
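A minimal sketch of what steps 1 and 2 could look like (assuming `get` looks a parameter up by name and returns `None` when absent):
```python
class ParameterSet:
    # ... existing attributes and methods omitted

    def contains(self, name: str) -> bool:
        # A parameter is present exactly when `get` finds it
        return self.get(name) is not None
```
| 0easy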
|
Title: Can you add additional liquidation feed?
Body: Looking at standards.py, it looks like Bitmex is the only liquidation feed?
```
LIQUIDATIONS: {
BITMEX: 'liquidation'
}
```
There is another project that I've been watching (that is also pretty awesome IMHO), [SignificantTrades](https://github.com/Tucsky/SignificantTrades). I'm particularly interested in Binance (in this case Binance Futures) at this time, because that is where I have funds. So I guess that is why I am aware of this.
The project here:
> line 82 of:
> https://github.com/Tucsky/SignificantTrades/blob/master/src/exchanges/binance-futures.js
The stream is 'forceOrder'. Documentation here:
> https://binance-docs.github.io/apidocs/futures/en/#liquidation-order-streams
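A hypothetical sketch of the corresponding addition to the mapping in standards.py (the `BINANCE_FUTURES` constant name is my assumption):
```
LIQUIDATIONS: {
    BITMEX: 'liquidation',
    BINANCE_FUTURES: 'forceOrder'  # assumption: constant name; 'forceOrder' is the Binance stream
}
```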
Looking at the docs, it looks like they also have a feed for all liquidations regardless of the pair. I would be curious to collect this and see what is going on. I don't know if it would 'fit in' with how things are set up now, though? | 0easy
|
Title: JS slugify (with unicode enabled) leaves invalid characters in the slug
Body: ### Issue Summary
The JS `slugify` function (used when cleaning a manually-entered slug) fails to strip out certain spacer/combining characters that are disallowed by Django slug validation.
### Steps to Reproduce
1. On a site with `WAGTAIL_ALLOW_UNICODE_SLUGS = True`, create a new page
2. Paste the text "উইকিপিডিয়ায় স্বাগতম!" into the slug field
3. Defocus the field; it will be rewritten to "উইকিপিডিয়ায়-স্বাগতম"
4. Attempt to save the page; this returns the error "Enter a valid “slug” consisting of Unicode letters, numbers, underscores, or hyphens."
The expected behaviour is that the browser-side slug cleanup step should result in a slug that is accepted by the Django validation. However, the output contains the character [U+09BF](https://www.fileformat.info/info/unicode/char/09bf/index.htm) which is disallowed, being a spacing combining character.
Note that the bug only arises for slugs entered manually into the slug field; pasting "উইকিপিডিয়ায় স্বাগতম!" into the _title_ field results in the slug "উইকপডযয-সবগতম" which is valid. This is because the title-to-slug conversion uses a more complex function which also performs transliteration when unicode slugs are disabled (which wouldn't be appropriate for manually-entered slugs); however, the simplified function used for cleaning manually-entered slugs is _overly_ simple:
https://github.com/wagtail/wagtail/blob/84b3bf70349171e44f283085f1d89ccd9deed40e/client/src/utils/slugify.ts#L10-L13
since it's only stripping out a designated short list of punctuation characters, where it would be better to strip out everything that is _not_ classed as a letter/number or explicitly allowed punctuation (just hyphen and underscore I think). Would be useful to compare this against the behaviour of the more advanced title-to-slug conversion, and Django's `django.utils.text.slugify`.
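For reference, the expected outcome can be checked against the server-side function mentioned above; a quick sketch:
```python
from django.utils.text import slugify

# The browser-side cleanup should produce something at least this strict:
print(slugify("উইকিপিডিয়ায় স্বাগতম!", allow_unicode=True))
# Characters outside \w (such as the spacing combining mark U+09BF) are
# stripped here, so the result passes Django's slug validation
```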
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.12.4.
- Django version: 5.1.1
- Wagtail version: 6.3a0
- Browser version: Chrome 128
### Working on this
<!--
Do you have thoughts on skills needed?
Are you keen to work on this yourself once the issue has been accepted?
Please let us know here.
-->
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| 0easy
|
Title: Debug log on terminal
Body: Hi,
when using Halo with boto3, I see a strange behaviour: all the debug logs from boto are displayed

MacOS/Python3.6
halo (0.0.5)
boto3 (1.4.4)
Thanks | 0easy
|
Title: Document and automate conda forge build
Body: | 0easy
|
Title: Improve the `ImageSequential` docs
Body: now that i see this one, i think it's worth somewhere to update the docs properly explaining when to use AuggmentationSequential vs ImageSequential
_Originally posted by @edgarriba in https://github.com/kornia/kornia/pull/2799#discussion_r1488008322_
It is worth discussing the advantages, examples, etc. for each API.
| 0easy
|
Title: [Feature Request] Add `outputs` and `outputs_list` to `window.dash_clientside.callback_context`
Body: For dash's client-side callbacks, adding `outputs` and `outputs_list` for `window.dash_clientside.callback_context` will improve operational freedom in many scenarios. Currently, only the following information can be obtained in the client-side callbacks:

```python
import dash
from dash import html
from dash.dependencies import Input, Output

app = dash.Dash(__name__)

app.layout = html.Div(
    [html.Button("trigger", id="trigger-demo"), html.Pre(id="output-demo")],
    style={"padding": 50},
)

app.clientside_callback(
    """(n_clicks) => {
        return JSON.stringify(Object.keys(window.dash_clientside.callback_context));
    }""",
    Output("output-demo", "children"),
    Input("trigger-demo", "n_clicks"),
    prevent_initial_call=True,
)

if __name__ == '__main__':
    app.run(debug=True)
```
| 0easy
|
Title: Current upload does not support inclusion of mime-type
Body: Our current upload/update methods do not include the mime-type. As such, when we upload photos to storage and download them again they don't render properly.
The current fix was proposed by John on the Discord channel. We should integrate it so that users can download/use photos.
```
import requests
from requests_toolbelt import MultipartEncoder

# Build a multipart body that carries the file's mime-type ("image/gif" here)
multipart_data = MultipartEncoder(
    fields={
        "file": (
            "party-parrot.gif",
            open("./out/party-parrot.gif", 'rb'),
            "image/gif",
        )
    }
)
formHeaders = {
    "Content-Type": multipart_data.content_type,
}
headers = dict(supabase._get_auth_headers(), **formHeaders)
response = requests.post(
    url=request_url,  # request_url is defined elsewhere
    headers=headers,
    data=multipart_data,
)
```
| 0easy
|
Title: referenced deps from an environment defined by an optional combinator are not pulled in
Body: ## Issue
<!-- Describe what's the expected behaviour and what you're observing. -->
Suppose we have two environments defined by the below:
```
[testenv:lint{,-ci}]
deps =
    flake8
    flake8-print
    flake8-black
    ci: flake8-junit-report
commands =
    !ci: flake8
    ci: flake8 --output-file flake8.txt --exit-zero
    ci: flake8_junit flake8.txt flake8_junit.xml
```
When we run these environments explicitly, the dependencies of each are appropriately handled -- i.e.
`tox r -e lint` --> installs flake8, flake8-print, flake8-black
`tox r -e lint-ci` --> installs flake8, flake8-print, flake8-black, flake8-junit-report
However, when we refer to these dependencies in another environment, such as with:
```
[testenv:safety]
deps =
    {[tox]requires}
    {[testenv]deps}
    {[testenv:lint-ci]deps}
    safety
commands =
    safety check
```
Only the deps not prefixed with `ci` are pulled into the safety env when it is run. This is regardless of how we refer to it -- `{[testenv:lint]deps}` also does not pull in the `flake8-junit-report` dependency.
Workaround seems to be to additionally define the optional combinator in the calling environment, i.e. define `[testenv:safety{,-ci}]` , then when we run `tox r -e safety-ci` -- it will pull in the `ci` prefix definitions in the referenced dependencies (but conversely, those defined with `!ci` would not be pulled in, even if we use the reference `{[testenv:lint]deps}`).
This workaround is undesirable as there is intended to only be one such "safety" environment, and expectation is that by providing the correct environment name in the reference (`{[testenv:lint-ci]deps}`) that it will pull the resources defined by the reference and not by those of the calling test env.
## Environment
Provide at least:
- OS: Ubuntu18, 20
- tox: 3.21, 3.28, 4.6.2
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
(safety) ubuntu:~/GitHub/NewTemp/CommonPy$ pip list
Package Version
------------------ ---------
aiohttp 3.8.4
aiosignal 1.3.1
async-timeout 4.0.2
attrs 23.1.0
bandit 1.7.5
black 22.12.0
boto3 1.20.11
botocore 1.23.11
certifi 2023.5.7
cffi 1.15.1
charset-normalizer 3.1.0
click 8.1.3
cryptography 41.0.1
distlib 0.3.6
docker 4.4.4
dparse 0.6.2
exceptiongroup 1.1.1
filelock 3.12.2
flake8 6.0.0
flake8-black 0.3.6
flake8-isort 6.0.0
flake8-print 5.0.0
frozenlist 1.3.3
gitdb 4.0.10
GitPython 3.1.31
idna 3.4
iniconfig 2.0.0
isort 5.12.0
Jinja2 3.1.2
jinja2-cli 0.8.2
jmespath 0.10.0
markdown-it-py 3.0.0
MarkupSafe 2.1.3
marshmallow 3.14.1
mccabe 0.7.0
mdurl 0.1.2
moto 2.2.16
multidict 6.0.4
mypy-extensions 1.0.0
packaging 21.3
pathspec 0.11.1
pbr 5.11.1
pip 22.0.4
pipdeptree 2.9.3
platformdirs 3.6.0
pluggy 1.0.0
psycopg2-binary 2.9.2
py 1.11.0
pycodestyle 2.10.0
pycparser 2.21
pyflakes 3.0.1
Pygments 2.15.1
PyMySQL 1.0.2
pyparsing 3.1.0
pytest 7.3.2
python-dateutil 2.8.2
pytz 2023.3
PyYAML 5.4.1
requests 2.31.0
responses 0.23.1
rich 13.4.2
ruamel.yaml 0.17.32
ruamel.yaml.clib 0.2.7
s3transfer 0.5.2
safety 2.3.5
setuptools 68.0.0
six 1.16.0
slackclient 2.9.3
smmap 5.0.0
sort-requirements 1.3.0
stevedore 5.1.0
toml 0.10.2
tomli 2.0.1
tox 3.28.0
tox-docker 1.7.0
tox-venv 0.4.0
types-PyYAML 6.0.12.10
typing_extensions 4.6.3
urllib3 1.26.16
virtualenv 20.23.1
websocket-client 1.6.0
Werkzeug 2.3.6
wheel 0.40.0
xmltodict 0.13.0
yarl 1.9.2
```
</details>
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>
```console
ubuntu:~/GitHub/NewTemp/CommonPy$ tox r -vve safety
using tox.ini: /home/ubuntu/GitHub/NewTemp/CommonPy/tox.ini (pid 27634)
removing /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/log
python3.8 (/usr/bin/python3.8) is {'executable': '/usr/bin/python3.8', 'implementation': 'CPython', 'version_info': [3, 8, 16, 'final', 0], 'version': '3.8.16 (default, Dec 7 2022, 01:12:13) \n[GCC 7.5.0]', 'is_64': True, 'sysplatform': 'linux', 'os_sep': '/', 'extra_version_info': None}
safety uses /usr/bin/python3.8
unit uses /usr/bin/python3.8
coverage uses /usr/bin/python3.8
build uses /usr/bin/python3.8
dev uses /usr/bin/python3.8
release uses /usr/bin/python3.8
using tox-3.28.0 from /home/ubuntu/.local/lib/python3.6/site-packages/tox/__init__.py (pid 27634)
skipping sdist step
safety start: getenv /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety
safety reusing: /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety
safety finish: getenv /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety after 0.03 seconds
safety start: finishvenv
safety finish: finishvenv after 0.01 seconds
safety start: envreport
setting PATH=/home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin:/home/ubuntu/GitHub/.vscode-server/bin/252e5463d60e63238250799aef7375787f68b4ee/bin/remote-cli:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
[27757] /home/ubuntu/GitHub/NewTemp/CommonPy$ /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin/python -m pip freeze >.tox/safety/log/safety-3.log
safety finish: envreport after 0.42 seconds
safety installed: bandit==1.7.5,black==22.12.0,commonpy==3.2.2,certifi==2023.5.7,charset-normalizer==3.1.0,click==8.1.3,distlib==0.3.6,docker==4.4.4,dparse==0.6.2,exceptiongroup==1.1.1,filelock==3.12.2,flake8==6.0.0,flake8-black==0.3.6,flake8-isort==6.0.0,flake8-print==5.0.0,gitdb==4.0.10,GitPython==3.1.31,idna==3.4,iniconfig==2.0.0,isort==5.12.0,Jinja2==3.1.2,jinja2-cli==0.8.2,markdown-it-py==3.0.0,MarkupSafe==2.1.3,mccabe==0.7.0,mdurl==0.1.2,mypy-extensions==1.0.0,packaging==21.3,pathspec==0.11.1,pbr==5.11.1,pipdeptree==2.9.3,platformdirs==3.6.0,pluggy==1.0.0,py==1.11.0,pycodestyle==2.10.0,pyflakes==3.0.1,Pygments==2.15.1,pyparsing==3.1.0,pytest==7.3.2,PyYAML==6.0,requests==2.31.0,rich==13.4.2,ruamel.yaml==0.17.32,ruamel.yaml.clib==0.2.7,safety==2.3.5,six==1.16.0,smmap==5.0.0,sort-requirements==1.3.0,stevedore==5.1.0,toml==0.10.2,tomli==2.0.1,tox==3.28.0,tox-docker==1.7.0,tox-venv==0.4.0,typing_extensions==4.6.3,urllib3==1.26.16,virtualenv==20.23.1,websocket-client==1.6.0
removing /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/tmp
safety start: run-test-pre
safety run-test-pre: PYTHONHASHSEED='2925510828'
safety run-test-pre: commands[0] | /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin/python -m pip install -r requirements.txt
setting PATH=/home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin:/home/ubuntu/GitHub/.vscode-server/bin/252e5463d60e63238250799aef7375787f68b4ee/bin/remote-cli:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
[27759] /home/ubuntu/GitHub/NewTemp/CommonPy$ /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin/python -m pip install -r requirements.txt
...
Installing collected packages: types-PyYAML, pytz, xmltodict, werkzeug, PyYAML, python-dateutil, PyMySQL, pycparser, psycopg2-binary, multidict, marshmallow, jmespath, frozenlist, attrs, async-timeout, yarl, responses, cffi, botocore, aiosignal, s3transfer, cryptography, aiohttp, slackclient, boto3, moto
Attempting uninstall: PyYAML
Found existing installation: PyYAML 6.0
Uninstalling PyYAML-6.0:
Successfully uninstalled PyYAML-6.0
Successfully installed PyMySQL-1.0.2 PyYAML-5.4.1 aiohttp-3.8.4 aiosignal-1.3.1 async-timeout-4.0.2 attrs-23.1.0 boto3-1.20.11 botocore-1.23.11 cffi-1.15.1 cryptography-41.0.1 frozenlist-1.3.3 jmespath-0.10.0 marshmallow-3.14.1 moto-2.2.16 multidict-6.0.4 psycopg2-binary-2.9.2 pycparser-2.21 python-dateutil-2.8.2 pytz-2023.3 responses-0.23.1 s3transfer-0.5.2 slackclient-2.9.3 types-PyYAML-6.0.12.10 werkzeug-2.3.6 xmltodict-0.13.0 yarl-1.9.2
WARNING: You are using pip version 22.0.4; however, version 23.1.2 is available.
You should consider upgrading via the '/home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety/bin/python -m pip install --upgrade pip' command.
safety finish: run-test-pre after 21.28 seconds
safety start: run-test
safety run-test: commands[0]....
```
</details>
## Minimal example
<!-- If possible, provide a minimal reproducer for the issue. -->
As defined above along with some others in the tox "requires" section (shouldn't be impactful, but if needed can provide full tox.ini), we can see `flake8-junit-report` is not listed to be installed:
```console
tox -e safety
safety create: /home/ubuntu/GitHub/NewTemp/CommonPy/.tox/safety
safety installdeps: urllib3<2, tox>=3.21.0,<4, tox-venv, tox-docker<2, setuptools>=65.5.1, wheel, pip>=21.3.1, flake8, flake8-print, flake8-black, flake8, flake8-isort, sort-requirements, bandit, setuptools>=65.5.1, wheel, pip>=21.3.1, pytest, jinja2-cli[yaml], pipdeptree, safety
```
| 0easy
|
Title: Install from requirements.txt
Body: This is an awesome project and I appreciate your work!
When installing from `pip install -r requirements.txt` the install will fail, because setup.py for django-plotly-dash has the line `import django_plotly_dash as dpd`, which requires dash, django, etc. to work.
See point 6. https://packaging.python.org/guides/single-sourcing-package-version/
> Although this technique is common, beware that it will fail if sample/\_\_init\_\_.py imports packages from install_requires dependencies, which will very likely not be installed yet when setup.py is run.
I was able to get it working by removing django-plotly-dash from the requirements.txt file and installing it separately after everything else. I don't know if you like any other methods proposed for version number, but I thought you would like to know about the issue.
Thank you again for your work!
| 0easy
|
Title: CI: check docs for newly added source code files
Body: - [ ] Add CI check that documentation index pages on docs.cleanlab.ai include new source code files which have been added in a new commit. Otherwise somebody may push commit with new source code files, but the documentation for them will never appear on docs.cleanlab.ai.
Ideally we also need to edit less docs/ files and index files to make documentation for new source code files appear in docs.cleanlab.ai. Solutions to minimize number of files that need to be touched are welcomed!
- [ ] Add reminder to CONTRIBUTING.md that lists steps needed to ensure new source code files will have their documentation appear in docs.cleanlab.ai
Currently new source code files must be listed in various places like:
- The appropriate index files in here: https://github.com/cleanlab/cleanlab/tree/master/docs/source
- A module docs page in: https://github.com/cleanlab/cleanlab/tree/master/docs/source/cleanlab
- The `__init__.py` file: https://github.com/cleanlab/cleanlab/blob/master/cleanlab/__init__.py | 0easy
|
Title: Ability to choose format of datetimes in list/table view
Body: ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
Currently the values for datetime columns seem to be rendered as ISO-8601 with millisecond precision which is unnecessary (eg `2022-04-04 22:43:30.027151`)
### Describe the solution you would like.
It'd be great if we had a way to pass some sort of custom formatter, ideally a callable that takes the DB value and returns the value that should be passed to the renderer.
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: Issues New metric API
Body: The canonical definition is here: https://chaoss.community/?p=3634 | 0easy
|
Title: Add count to spider_exceptions stats
Body: <!--
Thanks for taking an interest in Scrapy!
If you have a question that starts with "How to...", please see the Scrapy Community page: https://scrapy.org/community/.
The GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.
Keep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md
The following is a suggested template to structure your pull request, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#writing-patches and https://doc.scrapy.org/en/latest/contributing.html#submitting-patches
-->
## Summary
Today, we only increase the specific exception under `spider_exceptions`, e.g., [`spider_exceptions/AttributeError`](https://github.com/scrapy/scrapy/blob/master/scrapy/core/scraper.py#L250). We should also have a total count like we have for downloader, e.g., [`downloader/exception_count`](https://github.com/scrapy/scrapy/blob/master/scrapy/downloadermiddlewares/stats.py#L80).
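A sketch of what the addition could look like next to the existing per-exception counter (variable names are illustrative, not the exact ones in `scraper.py`):
```python
# existing: one counter per exception type
self.crawler.stats.inc_value(
    f"spider_exceptions/{exc.__class__.__name__}", spider=spider
)
# proposed: an overall counter, mirroring downloader/exception_count
self.crawler.stats.inc_value("spider_exceptions/count", spider=spider)
```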
## Motivation
This information is useful when creating visualizations of stats.
| 0easy
|
Title: Request for Euler Maruyama features in numpyro
Body: Dear Numpyro developers,
Please develop Euler Maruyama features in numpyro similar to features found in PyMC.
Thanks a lot.
|
Title: Provide a manylinux and macos bdist_wheel
Body: Modify the setup.py so it produces a bdist_wheel for manylinux including the libgraphqlparser.so/a
Modify the build process so this wheel is uploaded to pypi
Modify the cffi part of tartiflette/parser so it loads this lib instead of the local one.
| 0easy
|
Title: [BUG] hop chain demo bug
Body: in https://github.com/graphistry/pygraphistry/blob/master/demos/more_examples/graphistry_features/hop_and_chain_graph_pattern_mining.ipynb
line: ` 'to': names[j],`
should be: ` 'to': data[0]['usernameList'][j],`
---
Plots should be regenerated etc., or at least that line of code edited and the plots preserved.
|
Title: New Contributors Closing Issues metric API
Body: The canonical definition is here: https://chaoss.community/?p=3615 | 0easy
|
Title: Support for XGBRanker
Body: [XGBRanker](https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.XGBRanker) should be a very simple add; similar to #173
It should hopefully just involve (1) either using or slightly tweaking the converter for XGBRegressor (2) writing tests and making sure they pass | 0easy
|
Title: Add links to examples from the docstrings and user guide
Body: _TLDR: Meta-issue for new contributors to add links to the examples in helpful places of the rest of the docs._
## Description
This meta-issue is a good place to start with your first contributions to scikit-learn.
This issue builds on top of #26927 and is introduced for easier maintainability. The goal is exactly the same as in the old issue.
Here, we improve the documentation by making the [Examples](https://scikit-learn.org/stable/auto_examples/index.html) more discoverable by **adding links to examples in relevant sections of the documentation in the _API documentation_ and in the _User Guide_**:
- the [API documentation](https://scikit-learn.org/stable/api/index.html) is made from the docstrings of public classes and functions which can be found in the `sklearn` folder of the project
- the [User Guide](https://scikit-learn.org/stable/user_guide.html) can be found in the `doc/modules` folder of the project
Together with the [examples](https://scikit-learn.org/stable/auto_examples/index.html) (which are in the `examples` folder of the project), these files get rendered into html when the documentation is built and are then displayed on the [scikit-learn website](https://scikit-learn.org).
**Important: We estimate that only 70% of the examples in this list will ultimately be referenced. This means part of the task is deciding which examples deserve being referenced and we are aware that this is not a trivial decision, especially for new contributors. We encourage you to share your reasoning, and a team member will make the final call. We hope this isn’t too frustrating, but please know that evaluating an example is not just an exercise for new contributors; it’s a meaningful and valuable contribution to the project, even (and especially) if the example you worked on doesn’t end up being linked.**
## Workflow
We recommend this workflow for you:
0. have `pre-commit` installed in your environment as in point 10 of _How to contribute_ in the [development guide](https://scikit-learn.org/dev/developers/contributing.html#contributing-code) (this will re-format your contribution to the standards used in scikit-learn and will spare you a lot of confusion when you are a beginner)
1. pick an example to work on
- Make sure your example of interest had not recently been claimed by someone else by looking through the discussion of this issue (you will have to load hidden items in this discussion). Hint: If somebody has claimed an example several weeks ago and then never started it, you can take it. You can also take over tasks marked as _stalled_.
- search the repo for other links to your example and check if the example is already linked in relevant parts of the docs
- how to search the repo: a) find the file name of your example in the examples folder (it starts with `plot_...`); b) use full text search of your IDE to look for where that name appears
- you can totally ignore the "Gallery examples" on the website, as it is auto-generated; only look for real links in the repo
- comment on the issue to claim an example (you don't need to wait for a team member's approval before starting to work)
2. find suitable spots in either the _API documentation_ or the _User Guide_ (or both) where users would be happy to find your example linked
- read through your example and understand where it is making its most useful statements
- how to find a good spot (careful: we are extremely picky here)
- if the example demonstrates a certain real world use case: find where in the _User Guide_ the same use case is treated or could be treated
- if the example shows how to use a certain param: the param description in the _API documentation_ might be a good spot to put the link
- if the example compares different techniques: this highly calls for mentioning it in the more theoretical parts of the _User Guide_
- not all the examples listed here need to be referenced: a link to an example simply showing how to use some estimator doesn't add enough value
- if you find an example that doesn't add enough value to be linked: please leave a comment here; this kind of contribution is highly appreciated
3. add links
- An example with the path examples/developing_estimators/sklearn_is_fitted.py would be referenced like this:
```
:ref:`sphx_glr_auto_examples_developing_estimators_sklearn_is_fitted.py`
```
- see this example PR, that shows how to add a link to the User Guide: #26926
- we aim **not** to use the `.. rubric:: Examples` section to put the example if possible, but to integrate it into the text; be aware that if you add a link like this \:ref:\`title \<link\>\`, you can change its title so that the example's title gets substituted by your picked title and the link can be fitted more nicely to the sentences
- please avoid adding your link to a list of other examples, since we strive to add the links in the most relevant places
- please avoid adding a new `.. rubric:: Examples` section
4. test build the documentation before opening your PR
- have a look into the [Documentation part of the Development Guide](https://scikit-learn.org/dev/developers/contributing.html#building-the-documentation) to learn how to locally build the documentation.
- Check if your changes are displayed as desired by opening the test build in your browser.
5. open PR
- use a PR title like `DOC add links to <name of example>` (starting with DOC)
- do not refer to this issue in the title of the PR; instead:
- in the *Reference Issues/PRs* section of your PR, refer to this issue using "Towards `#30621`" (do **not** use "Closes #..." or "Fixes #...")
6. check the CI
- After the CI tests have finished (~90 minutes) you can find one that says "Check the rendered docs here!". In there, you can look into how the CI has built the documentation for the changed files to check if everything looks alright. You will see something like `auto_examples/path_to_example, [dev], [stable]`, where the first link is your branch's version, the second is the main dev branch and the third link is the last released scikit-learn version that is used for the stable documentation on the website.
- if the CI shows any failure, you should take action by investigating and proposing solutions; as a rule of thumb, you can find the most useful information from the CIs if you click the upper links first; in any case you need to click through several layers until you see actual test results with more information (and until it looks similar to running pytest, ruff or doctest locally)
- if the CI shows linting issues, check if you have installed and activated `pre-commit` properly, and fix the issue by the action the CI proposes (for instance adding or deleting an empty line)
- if you are lost and don't know what to do with a CI failure, look through other PRs from this issue; most things have already happened to others
- sometimes, http request errors such as 404 or 405 show up in the CI, in which case you should push an empty commit (`git commit --allow-empty -m "empty commit to re-trigger CI"`)
7. wait for reviews and be ready to adjust your contribution later on
## Expectation management for new contributors
How long will your first contribution take, up until the point you open a PR?
- 8-16 hours if you have never contributed to any project and have only basic or no understanding of the workflow yet
- 2-8 hours if you know the workflow and are just new to scikit-learn (more to the shorter end if you know what linting is and a bit of sphinx)
- 1-2 hours for your 2nd, 3rd, ... PR on the same issue for everyone
How long will it take us to merge your PR?
- we strive for a scikit-learn member to look at your PR within a few days and suggest changes depending on technical quality of the PR and an assessment of added value to the user
- we strive for a maintainer to evaluate your PR within a few weeks; they might also suggest changes before approving and merging
- the whole process on average takes several weeks and can take up to months, depending on availability of maintainers and on how many review cycles are necessary
## ToDo
Here's a list of all the remaining examples:
- examples/applications:
- [x] plot_model_complexity_influence.py # no references need to be added: #30814
- [ ] plot_out_of_core_classification.py #30462 (stalled)
- [ ] plot_prediction_latency.py #30462 (stalled)
- [ ] plot_topics_extraction_with_nmf_lda.py
- examples/bicluster:
- [ ] plot_bicluster_newsgroups.py
- [ ] plot_spectral_coclustering.py #29606 (stalled)
- examples/calibration:
- [ ] plot_compare_calibration.py
- examples/classification:
- [ ] plot_classifier_comparison.py
- [ ] plot_digits_classification.py
- examples/cluster:
- [x] plot_agglomerative_clustering_metrics.py #30867
- [x] plot_cluster_comparison.py #30127
- [ ] plot_coin_ward_segmentation.py #30916
- [x] plot_dict_face_patches.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2716167959)
- [ ] plot_digits_agglomeration.py #30979
- [ ] plot_digits_linkage.py
- [ ] plot_face_compress.py
- [x] plot_inductive_clustering.py #30182
- [ ] plot_segmentation_toy.py #30978
- [ ] plot_ward_structured_vs_unstructured.py #30861
- examples/covariance:
- [ ] plot_mahalanobis_distances.py
- [ ] plot_robust_vs_empirical_covariance.py
- [ ] plot_sparse_cov.py
- examples/decomposition:
- [x] plot_ica_blind_source_separation.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2649370018): https://github.com/scikit-learn/scikit-learn/pull/30786
- [x] plot_ica_vs_pca.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2649370018): https://github.com/scikit-learn/scikit-learn/pull/30786
- [ ] plot_image_denoising.py #30864
- [ ] plot_sparse_coding.py
- [ ] plot_varimax_fa.py
- examples/ensemble:
- [ ] plot_bias_variance.py #30845
- [ ] plot_ensemble_oob.py
- [ ] plot_feature_transformation.py
- [ ] plot_forest_hist_grad_boosting_comparison.py
- [ ] plot_forest_importances_faces.py
- [x] plot_forest_importances.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2731163071)
- [x] plot_forest_iris.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2676356956)
- [x] plot_gradient_boosting_categorical.py #30749
- [x] plot_gradient_boosting_oob.py #30749
- [x] plot_gradient_boosting_regularization.py #30749
- [ ] plot_monotonic_constraints.py
- [ ] plot_random_forest_regression_multioutput.py
- [x] plot_stack_predictors.py #30747
- [x] plot_voting_decision_regions.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30847#discussion_r1963601795) #30847
- [x] plot_voting_probas.py #30847
- examples/feature_selection:
- [x] plot_feature_selection.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/31000#issuecomment-2728836616) #31000
- [x] plot_f_test_vs_mi.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2734809734)
- [ ] plot_rfe_with_cross_validation.py
- [ ] plot_select_from_model_diabetes.py
- examples/gaussian_process:
- [ ] plot_gpc_iris.py #30605
- [ ] plot_gpc_isoprobability.py #30605
- [ ] plot_gpc.py #30605
- [ ] plot_gpc_xor.py #30605
- [ ] plot_gpr_co2.py
- [ ] plot_gpr_noisy.py
- [x] plot_gpr_noisy_targets.py #30850
- [ ] plot_gpr_on_structured_data.py
- [ ] plot_gpr_prior_posterior.py
- examples/inspection:
- [x] plot_causal_interpretation.py #30752
- [ ] plot_linear_model_coefficient_interpretation.py
- [ ] plot_permutation_importance_multicollinear.py
- [ ] plot_permutation_importance.py
- examples/linear_model:
- [ ] plot_ard.py
- [ ] plot_huber_vs_ridge.py
- [ ] plot_iris_logistic.py
- [ ] plot_lasso_and_elasticnet.py #30587
- [ ] plot_lasso_coordinate_descent_path.py
- [ ] plot_lasso_dense_vs_sparse_data.py
- [ ] plot_lasso_lars_ic.py
- [ ] plot_lasso_lars.py
- [ ] plot_lasso_model_selection.py
- [ ] plot_logistic_l1_l2_sparsity.py
- [ ] plot_logistic_multinomial.py
- [ ] plot_logistic_path.py
- [ ] plot_logistic.py #30942
- [ ] plot_multi_task_lasso_support.py
- [ ] plot_nnls.py
- [ ] plot_ols_3d.py
- [x] plot_ols.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2600872584)
- [x] plot_ols_ridge_variance.py #30683
- [ ] plot_omp.py
- [ ] plot_poisson_regression_non_normal_loss.py
- [ ] plot_polynomial_interpolation.py
- [ ] plot_quantile_regression.py
- [ ] plot_ridge_coeffs.py
- [ ] plot_ridge_path.py
- [ ] plot_robust_fit.py
- [ ] plot_sgd_comparison.py
- [ ] plot_sgd_iris.py
- [ ] plot_sgd_separating_hyperplane.py
- [ ] plot_sgd_weighted_samples.py
- [ ] plot_sparse_logistic_regression_20newsgroups.py
- [ ] plot_sparse_logistic_regression_mnist.py
- [ ] plot_theilsen.py
- [ ] plot_tweedie_regression_insurance_claims.py
- examples/manifold:
- [ ] plot_lle_digits.py
- [ ] plot_manifold_sphere.py #30959
- [ ] plot_swissroll.py
- [ ] plot_t_sne_perplexity.py
- examples/miscellaneous:
- [ ] plot_anomaly_comparison.py
- [ ] plot_display_object_visualization.py
- [ ] plot_estimator_representation.py
- [ ] plot_johnson_lindenstrauss_bound.py
- [ ] plot_kernel_approximation.py
- [ ] plot_metadata_routing.py
- [ ] plot_multilabel.py
- [x] plot_multioutput_face_completion.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2676356956)
- [ ] plot_outlier_detection_bench.py
- [ ] plot_partial_dependence_visualization_api.py
- [ ] plot_pipeline_display.py
- [ ] plot_roc_curve_visualization_api.py
- [ ] plot_set_output.py
- examples/mixture:
- [ ] plot_concentration_prior.py
- [ ] plot_gmm_covariances.py
- [ ] plot_gmm_init.py
- [ ] plot_gmm_pdf.py
- [x] plot_gmm.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30841#issue-2855807102): #30841
- [x] plot_gmm_selection.py #30841
- [x] plot_gmm_sin.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30841#issue-2855807102): #30841
- examples/model_selection:
- [ ] plot_confusion_matrix.py #30949
- [ ] plot_cv_predict.py
- [x] plot_det.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30977#pullrequestreview-2684987302)
- [ ] plot_grid_search_digits.py
- [ ] plot_grid_search_refit_callable.py
- [ ] plot_grid_search_stats.py #30965
- [ ] plot_grid_search_text_feature_extraction.py #30974
- [ ] plot_likelihood_ratios.py
- [ ] plot_multi_metric_evaluation.py
- [ ] plot_permutation_tests_for_classification.py
- [x] plot_precision_recall.py # [no reference needs to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2669520889)
- [ ] plot_randomized_search.py
- [ ] plot_roc_crossval.py
- [ ] plot_roc.py
- [ ] plot_successive_halving_heatmap.py
- [ ] plot_successive_halving_iterations.py
- [ ] plot_train_error_vs_test_error.py
- [x] plot_underfitting_overfitting.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2681734179)
- [x] <strike>plot_validation_curve.py</strike> #had been merged with another example in #29936
- examples/neighbors:
- [ ] plot_digits_kde_sampling.py
- [ ] plot_kde_1d.py
- [ ] plot_lof_novelty_detection.py
- [ ] plot_lof_outlier_detection.py
- [x] plot_nca_classification.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30849#issuecomment-2665171341) #30849
- [x] plot_nca_dim_reduction.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30849#issuecomment-2665171341) #30849
- [x] plot_nca_illustration.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30849#issuecomment-2665171341) #30849
- [ ] plot_species_kde.py
- examples/semi_supervised:
- [x] plot_label_propagation_digits_active_learning.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30553#issuecomment-2582852356) #30553
- [x] plot_label_propagation_digits.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30553#issuecomment-2582852356) #30553
- [x] plot_label_propagation_structure.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30553#issuecomment-2582852356) #30553
- [ ] plot_self_training_varying_threshold.py
- [ ] plot_semi_supervised_newsgroups.py #30882
- [ ] plot_semi_supervised_versus_svm_iris.py
- examples/svm:
- [ ] plot_custom_kernel.py
- [ ] plot_iris_svc.py
- [ ] plot_linearsvc_support_vectors.py
- [ ] plot_oneclass.py
- [ ] plot_rbf_parameters.py
- [ ] plot_separating_hyperplane.py
- [ ] plot_separating_hyperplane_unbalanced.py
- [ ] plot_svm_anova.py
- [ ] plot_svm_margin.py #26969 (stalled) #30975 ([maybe remove the example](https://github.com/scikit-learn/scikit-learn/pull/30975#pullrequestreview-2684941292))
- [ ] plot_weighted_samples.py #30676
- examples/tree:
- [x] plot_iris_dtc.py [no references need to be added](https://github.com/scikit-learn/scikit-learn/pull/30650#issuecomment-2653822241) #30650
- <strike>[x] plot_tree_regression_multioutput.py </strike> # was merged with another example in #26962
- [x] plot_unveil_tree_structure.py # [no references need to be added](https://github.com/scikit-learn/scikit-learn/issues/30621#issuecomment-2626465696)
## What comes next?
- after working a bit here, you might want to further explore contributing to scikit learn
- we have #22827 and #25024 that are both also suitable for beginners, but might move forwards a little slower than here
- we are looking for people who are willing to do some intense work to improve or merge some examples; these will be PRs that will be intensely discussed and thoroughly reviewed and will probably take several months; if this sounds good to you, please open an issue with a suggestion and maintainers will evaluate your idea
- this could look like #29963 and #29962
- we also have an open issue to discuss examples that can be removed: #27151
- if you are more senior professionally, you can look through the issues with the [`help wanted`](https://github.com/scikit-learn/scikit-learn/issues?q=is%3Aissue%20state%3Aopen%20label%3A%22help%20wanted%22) label or with the [`moderate`](https://github.com/scikit-learn/scikit-learn/labels/Moderate) label, or you can take over [stalled PRs](https://github.com/scikit-learn/scikit-learn/issues?q=is%3Apr%20state%3Aopen%20label%3AStalled); these kinds of contributions need to be discussed with maintainers, and I would recommend seeking their approval first and not investing too much work before you get a go | 0easy
|
Title: Try Azure Pipelines for greater test speed
Body: We've found that Azure Pipelines is much faster on tests than Travis for CPython. NumFOCUS projects are currently free on Azure Pipelines. This may help with tests, especially on Windows. | 0easy
|
Title: Do not show hidden directories in PY_TEMPLATE_DIR
Body: For example, `.git` is shown in the docker-compose setup from #14 | 0easy
|
Title: Raise test coverage above 90% for giotto/diagrams/_metrics.py
Body: Current test coverage from pytest is 79% | 0easy
|
Title: Marketplace - Change this header so it's the same style as "Featured Agents"
Body: ### Describe your issue.
Change it to the "large-Poppins" style.. as per this typography sheet. [https://www.figma.com/design/aw299myQfhiXPa4nWkXXOT/agpt-template?node-id=7-47&t=axoLiZIIUXifeRWU-1](url)
Style name: large-poppins
font: poppins
size: 18px
line-height: 28px

| 0easy
|
Title: Docs should demonstrate returning early in a responder
Body: New Falcon users may not realize they can simply `return` anywhere in a responder. This is useful for complicated nested logic paths. We should make sure examples of this are sprinkled throughout the docs in key places.
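A minimal sketch of the pattern (the resource and parameter names are made up):
```python
import falcon

class ThingsResource:
    def on_get(self, req, resp):
        token = req.get_param("token")
        if token is None:
            resp.status = falcon.HTTP_401
            return  # bail out early instead of nesting the happy path below
        resp.media = {"things": []}
```
| 0easy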
|
Title: [Docs] Remove redundant provision of `id` in docs examples
Body: We still have some examples where an `id` is provided to a component even though it is not required.
1. Look through the code examples in our docs e.g. `vizro-core/docs` and `vizro-ai/docs`
2. Remove the `id` from `vm.Graph`, `vm.Table`, `vm.AgGrid` or `vm.Card` if **it is not required**
#### When is it not required?
The `id` is normally not required if that component is not the target of any kind of action e.g. filter_interaction, export, filters or parameters. A good rule of thumb is, if the `id` appears only once in the entire app configuration, it's probably not required.
**Example of a redundant `id` provision** (and the first example where you can remove it from the docs):
In the first example the `id="scatter_chart"` is not required, because the Graph is not being targeted by any action. Also the `id` only appears once in the entire app configuration. In the second example it is required though, because it is now the target of the Filter.
```
from vizro import Vizro
import vizro.plotly.express as px
import vizro.models as vm

iris = px.data.iris()

page = vm.Page(
    title="My first page",
    components=[
        vm.Graph(id="scatter_chart", figure=px.scatter(iris, x="sepal_length", y="petal_width", color="species")),
    ],
)

dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()
```
**Example where the `id` is required:**
```
from vizro import Vizro
import vizro.plotly.express as px
import vizro.models as vm

iris = px.data.iris()

page = vm.Page(
    title="My first page",
    components=[
        vm.Graph(id="scatter_chart", figure=px.scatter(iris, x="sepal_length", y="petal_width", color="species")),
        vm.Graph(id="scatter_chart2", figure=px.scatter(iris, x="petal_length", y="sepal_width", color="species")),
    ],
    controls=[
        vm.Filter(column="petal_length", targets=["scatter_chart"], selector=vm.RangeSlider(step=1)),
    ],
)

dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()
```
| 0easy
|
Title: CI: send coverage reports to Codecov once when all tests are done
Body: Looks like Codecov is still failing a lot; we send coverage info for every single test. I think it would be best to collect all the coverage and then send it only once.
Not sure if this is possible, but it would be neat 😊 | 0easy
|
Title: ASGI exceptions outside handlers should be able to pass status
Body: An exception raised in the lifecycle (but not handler) of a request on an ASGI-served application should be able to pass a status code.
---
> In my experiment, raising a `BadRequest` in `asgi.py` will cause the ASGI server return 500 error, instead of 400. This seems out of the scope of this PR, but I am wondering whether we should change the raised error to `ServerError` or others before we figure out how to let ASGI server return a 400 error.
_Originally posted by @ChihweiLHBird in https://github.com/sanic-org/sanic/pull/2606#discussion_r1133448078_
| 0easy
|
Title: Module collects files with 'test' in the name, even if they are not test files.
Body: Hi. I was just looking through the codebase to familiarize myself with how it works, and I found this bug: the module collects files that have `test` in the name, including files with `test` in the middle of a word:
```
╰─ touch sample/intestine.py
╰─ ls sample
__init__.py intestine.py __pycache__ settings.py urls.py wsgi.py
╰─ pytest --picked
======================================================== test session starts =========================================================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.3, pluggy-0.6.0
rootdir: /home/misterrios/Projects/sample/sample, inifile:
plugins: picked-0.1.0
collecting 0 items
Changed test files... 1. ['sample/intestine.py']
Changed test folders... 1. ['.pytest_cache/']
collected 0 items
==================================================== no tests ran in 0.05 seconds ====================================================
```
Pytest discovers tests by checking for `test_*.py` or `*_test.py`
See: https://docs.pytest.org/en/latest/goodpractices.html#test-discovery
So, a possible solution would be to use the built-in module `pathlib` to check the file name and suffix, combined with `startswith` and `endswith`. See: https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.name
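A minimal sketch of that check, mirroring pytest's default discovery rules:
```python
from pathlib import Path

def is_test_file(path_str: str) -> bool:
    """Match pytest's defaults: test_*.py or *_test.py."""
    p = Path(path_str)
    return p.suffix == ".py" and (
        p.name.startswith("test_") or p.stem.endswith("_test")
    )

assert not is_test_file("sample/intestine.py")
assert is_test_file("sample/test_urls.py")
```
| 0easy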
|
Title: docs: replace f-string in logger usage
Body: Some of our documentation examples use f-strings, e.g. https://faststream.airt.ai/latest/getting-started/serialization/examples/#__codelineno-11-23
We should replace them to follow the official logging usage recommendations, i.e. `logger.log("%s", "message")`.
This is not related to the framework sources; it concerns the documentation examples only.
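For illustration, a short sketch of the difference (`msg` is a stand-in variable):
```python
import logging

logger = logging.getLogger(__name__)
msg = {"id": 1}

logger.info(f"received: {msg}")   # f-string: formatted even if the record is filtered out
logger.info("received: %s", msg)  # recommended: formatting deferred until actually emitted
```
| 0easy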
|
Title: [Feature request] Add apply_to_images to Spatter
Body: | 0easy
|
Title: `sourceMappingURL` triggers access to non-existent file
Body: ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
The .js files specify `sourceMappingURL` but the `.map` files are not included in the package. So the browser tries to fetch it and gets a 404 error. We should add the source map files or remove the following lines:
https://github.com/aminalaee/sqladmin/blob/daea8c9c385794605ddeb02b7105d71ca9a9823f/sqladmin/statics/js/bootstrap.min.js#L7
### Steps to reproduce the bug
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Debugging material
```
INFO: 127.0.0.1:8000 - "GET /js/bootstrap.min.js.map HTTP/1.1" 404 Not Found
```
### Environment
- Ubuntu 20.04.4 LTS
- Python 3.9.12
- SQLAdmin 0.1.11
### Additional context
_No response_ | 0easy
|
Title: Ray/Trendline/Linear Regression of two points
Body: Say I have two points on the DataFrame (price and UNIX timestamp) and I want to draw a ray (endless) that goes through these two points. Does Pandas TA or Pandas have a function for this? Is there an easy way to draw a linear regression ray? I just have the two points and I want to draw a ray through them and get all the data points (price and timestamp of each), say one week into the future, to be predicted. Any example of how to use "linear_regression"?
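For the two-point case, no regression fit is strictly needed; a quick numpy sketch (the values are hypothetical):
```python
import numpy as np

# Two known points: (UNIX timestamp, price)
x = np.array([1_650_000_000, 1_650_086_400], dtype=float)
y = np.array([100.0, 105.0])

slope, intercept = np.polyfit(x, y, 1)  # the unique line through two points

# Extend the ray one week into the future at daily steps
future_x = x[-1] + 86_400 * np.arange(1, 8)
future_y = slope * future_x + intercept
```
| 0easy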
|
Title: Documentation for how to serve a wasm-powered HTML?
Body: ### Documentation is
- [x] Missing
- [ ] Outdated
- [ ] Confusing
- [x] Not sure?
### Explain in Detail
What are the options for serving the output of: https://docs.marimo.io/guides/exporting/#export-to-wasm-powered-html
Other than github pages?
Would this be things like apache and nginx?
I don't have any experience with either, so if there was a clear example of how to serve the output_dir folder with either of these, it would be super handy.
### Your Suggestion for Changes
Some extra documentation on how one would ideally serve the output wasm HTML from their own computer/device, so that other users can access the notebook (dashboard) while the original code stays hidden/private.
(github pages does not allow private repositories, unless you get github enterprise)
Hoping to understand the pros and cons of self-hosting. | 0easy
|
Title: Why keyword arguments for SortedSet commands should be bytes
Body: The argument type for many SortedSet commands (the `min`/`max` kwargs) is enforced by an isinstance check.
In one place it even has this comment:
```python
if not isinstance(max, bytes): # FIXME Why only bytes?
raise TypeError("max argument must be bytes")
```
I think it is more convenient (for me as a user) to pass strings instead of bytes.
Some of these commands are for "lexicographical" operations, so the user mostly works with strings and not blobs of bytes.
Also, strings would be more consistent with other commands.
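A sketch of a more permissive coercion helper (a suggestion, not the library's current code):
```python
def _to_bytes(value):
    # Accept str for convenience; encode to bytes for the wire protocol
    if isinstance(value, str):
        return value.encode("utf-8")
    if isinstance(value, (bytes, bytearray)):
        return bytes(value)
    raise TypeError("min/max argument must be str or bytes")
```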
Should I work on a PR for the change?
If there is a deep reason for that - could you please explain the logic behind it?
Thanks.
| 0easy
|
Title: Screenshot Quality argument not working?
Body: The quality argument of the screenshot coroutine does not seem to work.
The code I tested with:
```python
import asyncio
from pyppeteer import launch

async def main():
    # EXEC_PATH and link are defined elsewhere in the original script
    browser = await launch(headless=True, executablePath=EXEC_PATH)
    page = await browser.newPage()
    await page.goto(link)
    await page.screenshot({'fullPage': True,
                           'path': './FILES/584512526/3726/Python Dictionarieswebshotbot.jpeg',
                           'type': 'jpeg',
                           'quality': 1})

asyncio.get_event_loop().run_until_complete(main())
```
Unfortunately, the output quality is the same if I change the quality argument to 100.
Below is an example of a screenshot taken with quality 1

**Is there something wrong with my code?**
| 0easy
|
Title: Make "events" in the document even clearer
Body: Currently, we are using the term "event" for both Events API data and any incoming payload requests from Slack in the Bolt document. For example,
* There is the "Listening to events" section: https://slack.dev/bolt-python/concepts#event-listening which is referring to the Events API
* Then there is the "Acknowledging events" section: https://slack.dev/bolt-python/concepts#acknowledge which is referring to actions, shortcuts, commands, and options. Also, Bolt acknowledges the "events" for Events API under the hood.
This can be confusing for developers, especially when they are not yet familiar with the Slack Platform.
To improve this, we can consistently use the term "request" for incoming requests from Slack. Specifically,
* We can rename the section "Acknowledging events" to "Acknowledging requests"
* We can replace all phrases like "acknowledging events" with "acknowledging requests" in all sections
### The page URLs
* https://slack.dev/bolt-python/concepts#acknowledge and others
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: [Feature]: Convert all `os.environ(xxx)` to `monkeypatch.setenv` in test suite
Body: ### 🚀 The feature, motivation and pitch
see title
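A hedged before/after sketch of the requested conversion (the environment variable name is made up):
```python
import os

# before: mutates the process environment and leaks into later tests
def test_feature_old():
    os.environ["SOME_VLLM_FLAG"] = "1"

# after: pytest's monkeypatch restores the variable when the test ends
def test_feature_new(monkeypatch):
    monkeypatch.setenv("SOME_VLLM_FLAG", "1")
```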
### Alternatives
_No response_
### Additional context
_No response_
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 0easy
|
Title: Deprecation warnings: `scipy.sparse.sparsetools` and `np.float`
Body: #### Problem description
Run the test for the new version of [WEFE](https://github.com/raffaem/wefe)
#### Steps/code/corpus to reproduce
```
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/scipy/sparse/sparsetools.py:21
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/scipy/sparse/sparsetools.py:21: DeprecationWarning: `scipy.sparse.sparsetools` is deprecated!
scipy.sparse.sparsetools is a private module for scipy.sparse, and should not be used.
_deprecated()
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:34
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:34: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
method='lar', copy_X=True, eps=np.finfo(np.float).eps,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:164
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:164: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
method='lar', copy_X=True, eps=np.finfo(np.float).eps,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:281
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:281: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
eps=np.finfo(np.float).eps, copy_Gram=True, verbose=0,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:865
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:865: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1121
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1121: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1149
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1149: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
eps=np.finfo(np.float).eps, positive=False):
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1379
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1379: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
max_n_alphas=1000, n_jobs=None, eps=np.finfo(np.float).eps,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1621
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1621: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
max_n_alphas=1000, n_jobs=None, eps=np.finfo(np.float).eps,
../../../../../../home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1755
/home/raffaele/.virtualenvs/gensim4/lib/python3.8/site-packages/sklearn/linear_model/_least_angle.py:1755: DeprecationWarning: `np.float` is a deprecated alias for the builtin `float`. Use `float` by itself, which is identical in behavior, to silence this warning. If you specifically wanted the numpy scalar type, use `np.float_` here.
eps=np.finfo(np.float).eps, copy_X=True, positive=False):
```
#### Versions
Please provide the output of:
```python
Linux-5.8.0-33-generic-x86_64-with-glibc2.32
Python 3.8.6 (default, Sep 25 2020, 09:36:53)
[GCC 10.2.0]
Bits 64
NumPy 1.20.0rc1
SciPy 1.6.0rc1
gensim 4.0.0beta
FAST_VERSION 1
```
| 0easy
|
Title: Enhancement: Missing ConstrainedDate
Body: We apparently are missing the `ConstrainedDate` type, and thus must add it. | 0easy
|
Title: remove validation of url file extensions
Body: Currently, our `Url` types like `ImageUrl`, `VideoUrl` etc are performing validation based on the file extension: only file extensions of that modality are allowed.
However, that is problematic since we cannot catch all edge cases. For example, http://lh6.ggpht.com/-IvRtNLNcG8o/TpFyrudaT6I/AAAAAAAAM6o/_11MuAAKalQ/IMG_3422.JPG?imgmax=800 is a completely valid image URL, but currently our validation will reject it, because the extension is not "valid".
We could fix this edge case, but the general problem remains: We are not confident that we can handle all edge cases for all data modalities, and failing validation where it should not can be a serious blocker for users.
So, we decided to remove file extension validation for all of our `Url` types. | 0easy
|
Title: Do not override user settings when running cloud set-key tests
Body: Some of the cloud tests call the set-key command to verify the functionality. However, this causes the user's existing key to be overridden after running the tests.
We should add a `pytest.fixture` that backs up the user settings and restores them after the test runs. Essentially, we need to back up the file that stores the key.
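A minimal sketch of such a fixture (the settings-file path here is an assumption; the real location depends on how the key is stored):
```python
import shutil
from pathlib import Path

import pytest

# Hypothetical path; replace with the actual file that stores the cloud key
SETTINGS_FILE = Path("~/.ploomber/config.yaml").expanduser()


@pytest.fixture
def backup_user_settings(tmp_path):
    """Back up the user settings file and restore it after the test runs."""
    backup = tmp_path / "settings-backup"
    existed = SETTINGS_FILE.exists()
    if existed:
        shutil.copy(SETTINGS_FILE, backup)
    yield
    if existed:
        shutil.copy(backup, SETTINGS_FILE)
    elif SETTINGS_FILE.exists():
        # the test created a key that didn't exist before; remove it
        SETTINGS_FILE.unlink()
```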
https://github.com/ploomber/ploomber/blob/9e11fcbcdf763d6f3111e8254dc327a56306971e/tests/cli/test_cloud.py#L125 | 0easy
|
Title: The expires_in function needs to have a timedelta to avoid tokenExpiry errors for milliseconds
Body: **Describe the bug**
I am using the OAuth2session object
```
client = OAuth2Session(client_id=client_id, client_secret=client_secret, token_endpoint=token_url, grant_type='client_credentials')
client.fetch_token(token_url)
client.get(<MY_PROTECTED_URL>)
```
Here, the library behavior is that the token gets automatically refreshed if it has expired. Refer https://github.com/lepture/authlib/blob/master/authlib/oauth2/client.py#L257
However, the function which checks the token expiry, https://github.com/lepture/authlib/blob/master/authlib/oauth2/rfc6749/wrappers.py#L13 , simply compares the expiry time with the current time. Because of this we miss some corner cases where the token is about to expire in a few milliseconds/seconds, so by the time the API call to the protected URL is made, authentication fails.
`JWT expired at 2023-06-20T13:16:42Z. Current time: 2023-06-20T13:16:42Z, a difference of 105 milliseconds. Allowed clock skew: 0 milliseconds."
`
**Error Stacks**
`JWT expired at 2023-06-20T13:16:42Z. Current time: 2023-06-20T13:16:42Z, a difference of 105 milliseconds. Allowed clock skew: 0 milliseconds."
`
**To Reproduce**
A minimal example to reproduce the behavior:
Exact replication is not possible here, as the request fails by only a few milliseconds.
```
client = OAuth2Session(client_id=<client_id>, client_secret=<client_secret>, token_endpoint=<token_url>, grant_type='client_credentials')
client.fetch_token(<token_url>)
client.get(<MY_PROTECTED_URL>)
```
**A clear and concise description of what you expected to happen.**
Even if the token expired only a few milliseconds ago, the library should be able to handle such cases by obtaining a new token.
Instead of https://github.com/lepture/authlib/blob/master/authlib/oauth2/rfc6749/wrappers.py#L17 , we should add a small timedelta. For example, even if the token is only going to expire in the next 60 seconds, refresh it anyway.
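Something like this sketch of the wrapper method (the `leeway` parameter name and its 60-second default are just illustrative):
```python
import time


def is_expired(self, leeway=60):
    """Treat the token as expired `leeway` seconds before its actual expiry,
    so a refresh happens before an in-flight request can fail."""
    expires_at = self.get("expires_at")
    if not expires_at:
        return None
    return expires_at - leeway < time.time()
```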
**Environment:**
- OS: Linux
- Python Version: 3.6
- Authlib Version: 1.1.0
**Additional context**
A timedelta should be introduced in the function so that we can avoid API requests failing by a few milliseconds. Here, we can add logic to treat the token as expired, say, 30-60 seconds prior to its actual expiry.
| 0easy
|
Title: SentryMiddleware
Body: I've taken a first pass at a [Sentry integration for ASGI](https://github.com/encode/sentry-asgi).
Things you probably want to do to support it well:
* Document `.add_middleware` for adding ASGI middleware. (Perhaps with a section linking to third party ASGI middleware implementations?)
* Ensure the router updates the ASGI scope with an 'endpoint', which should be the routed class or function. | 0easy
|
Title: Add `__repr__()` method to NormalizedDict
Body: I see the problem that, with the RobotDebug REPL I am updating right now, `NormalizedDict` objects are just shown like this:
`${SUITE_METADATA} = <robot.utils.normalizing.NormalizedDict object at 0x1048063e0>`
Could we do it like this?
```python
def __repr__(self):
    return '%s(%s)' % (self.__class__.__name__, str(self))
```
or:
```python
def __repr__(self):
    return f'{self.__class__.__name__}({self.__str__()})'
```
Then it would be :
`${SUITE_METADATA} = NormalizedDict({'Hello': 'World', 'Test': '123'})`
We could also write it like this?
`robot.utils.NormalizedDict({'Hello': 'World', 'Test': '123'})`
But I think the former version would be good enough.
What do you think?
| 0easy
|
Title: Topic 3. Decision tree regressor, MSE
Body: In the DecisionTreeRegressor example, the MSE calculation in the plot title is wrong:
`plt.title("Decision tree regressor, MSE = %.2f" % np.sum((y_test - reg_tree_pred) ** 2))`
It also needs to be divided by the number of observations; I suggest fixing it like this:
`plt.title("Decision tree regressor, MSE = %.4f" % (np.sum((y_test - reg_tree_pred) ** 2) / n_test))`
File:
https://github.com/Yorko/mlcourse.ai/blob/master/jupyter_english/topic03_decision_trees_kNN/topic3_decision_trees_kNN.ipynb
And likewise in the Russian version:
https://github.com/Yorko/mlcourse.ai/blob/master/jupyter_russian/topic03_decision_trees_knn/topic3_trees_knn.ipynb | 0easy
|
Title: Documentation Notebooks
Body: Hello, I was going through some documentation notebooks and noticed that many of them ([Poincare Embeddings](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/Poincare%20Tutorial.ipynb), [WikiNews](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/wikinews-bigram-en.ipynb), [Varembed](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/Varembed.ipynb)) have been uploaded without being run to the end; they fail with errors halfway through.
@piskvorky (and others), would it be useful to have these updated? Or is someone from the RaRe / gensim team in charge of documentation? | 0easy
|
Title: [BUG] `id` passed through `dcc.Loading` not visible in DOM
Body: **Describe your context**
Hello guys 👋
I am currently trying to pass an `id` to the dcc.Loading component or its parent container and I would like the `id` to be visible in the DOM such that I can target the CSS of the components inside the `dcc.Loading` via ID.
Please provide us your environment, so we can easily reproduce the issue.
```
dash 2.17.0
dash-bootstrap-components 1.5.0
dash-core-components 2.0.0
dash-html-components 2.0.0
```
- if frontend related, tell us your Browser, Version and OS
- Browser: Chrome
**Describe the bug**
Let's take the example app below - what I would have expected is that there would be an html div visible with a className="loading" and an id="loading-id". However, if I provide the `className="loading"` I see a div but it does not have the className="loading" in the DOM nor does it have the id="loading-id" in the DOM.
When I switch this to `parent_className="loading"`, now I see a div with the className="loading", but I cannot attach an id to this parent container.
I am not a React expert, but from the source I can see that the `id` doesn't seem to be passed on in the return of the React component and is therefore not visible in the DOM? Is there any reason for that?
https://github.com/plotly/dash/blob/09252f8d2f690480cc468b2e015f9e2417dc90ad/components/dash-core-components/src/components/Loading.react.js#L128-L133
```
from dash import Dash, html, dcc, callback, Output, Input
import plotly.express as px
import pandas as pd

df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/gapminder_unfiltered.csv')

app = Dash()

app.layout = [
    html.H1(children='Title of Dash App', style={'textAlign': 'center'}),
    dcc.Dropdown(df.country.unique(), 'Canada', id='dropdown-selection'),
    dcc.Loading(dcc.Graph(id='graph-content'), color='grey', id="loading-id", parent_className="loading")
]

@callback(
    Output('graph-content', 'figure'),
    Input('dropdown-selection', 'value')
)
def update_graph(value):
    dff = df[df.country == value]
    return px.line(dff, x='year', y='pop')

if __name__ == "__main__":
    app.run(debug=True)
```
**Expected behavior**
I would expect the `id` to be passed on to the React component and be visible in the DOM, i.e. having a `<div class="loading" id="loading-id"></div>` visible in the DOM.
**Screenshots**

| 0easy
|
Title: Bug: 'KNeighborsAlgorithm' object has no attribute 'classes_'
Body: Current behavior:
'KNeighborsAlgorithm' object has no attribute 'classes_'
Problem during computing permutation importance. Skipping ...
Expected: KNeighbors to be trained
| 0easy
|
Title: Support forward mode differentiation for SVI
Body: Hello everybody. I am encountering a problem with the VonMises distribution, and in particular with its concentration parameter. I am trying to perform a very simple MLE of a hierarchical model.
```
import numpy as np
import numpyro

# Nc (the number of data points) is assumed to be defined elsewhere
def model(X):
    plate = numpyro.plate("data", Nc)
    kappa = numpyro.param("kappa", 1., constraint=numpyro.distributions.constraints.positive)
    mu = numpyro.param("mu", 0., constraint=numpyro.distributions.constraints.interval(-np.pi, np.pi))
    with plate:
        phi = numpyro.sample("phi", numpyro.distributions.VonMises(mu, kappa))
    with plate:
        numpyro.sample("X", numpyro.distributions.Normal(phi, 1.), obs=X)

def guide(X):
    pass
```
When I run SVI, I get:
`ValueError: Reverse-mode differentiation does not work for lax.while_loop or lax.fori_loop with dynamic start/stop values. Try using lax.scan, or using fori_loop with static start/stop.`
Playing around with the model, I noticed that if I fix kappa (the concentration parameter) to a constant and optimize only the mean of the VonMises, it works.
Also following the [docs](https://num.pyro.ai/en/stable/reparam.html#numpyro.infer.reparam.CircularReparam) I added on top of my function:
`@handlers.reparam(config={"phi": CircularReparam()})`
which changes the error message to:
`NotImplementedError: `
I finally tried (based on this) changing my sample to:
```
with plate:
    with handlers.reparam(config={'phi': CircularReparam()}):
        phi = numpyro.sample("phi", numpyro.distributions.VonMises(mu, 2.0))
```
Which also ends up with the `NotImplementedError.`
Is there a trick, or is the VonMises distribution just not fully implemented yet?
Have a good day!
PS: HMC works for sampling the concentration parameter.
A | 0easy
|
Title: The Web version is now released
Body: # Web version project
[Johnserf-Seed/TikTokWeb](https://github.com/Johnserf-Seed/TikTokWeb)

| 0easy
|
Title: [RFC] Allow dotted environment variables
Body: **Is your feature request related to a problem? Please describe.**
Parameters defined in environment variables cannot be accessed with dotted key notation in the same way as parameters defined in settings files, which prevents overriding dotted key parameters with environment variables.
**Example:** Environment variable `DYNACONF_SERVICE_PORT` cannot be accessed via `settings['SERVICE.PORT']`. Instead, it must be accessed via `settings['SERVICE_PORT']`.
**Describe the solution you'd like**
One solution would be to use a designated sequence of characters to symbolize a dot separator, possibly two consecutive underscores ( `__` ). This character sequence could be configurable (eg `ENVVAR_DOT_SEPARATOR_FOR_DYNACONF`).
**Example:** Environment variable `DYNACONF_SERVICE__PORT` could be accessed via `settings['SERVICE.PORT']`, and would override any `service.port` parameter defined in a settings file.
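For illustration, a sketch of the proposed behavior (this does not work today; the `__` mapping is the proposal, and the instantiation style follows dynaconf 3.x):
```python
import os

from dynaconf import Dynaconf

# Proposed behavior (not current): "__" in the env var name maps to "." in the key
os.environ["DYNACONF_SERVICE__PORT"] = "8080"

settings = Dynaconf(envvar_prefix="DYNACONF")
print(settings["SERVICE.PORT"])  # proposed dotted access
print(settings.SERVICE.PORT)     # proposed object dot notation
```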
**Describe alternatives you've considered**
An alternative to this could be to simply match either dotted or non-dotted key values to the correlated environment variable. As a note, this is how Spring Framework handles this feature.
**Example:** Environment variable `DYNACONF_SERVICE_PORT` could be accessed via `settings['SERVICE.PORT']` or via `settings['SERVICE_PORT']`, and would override any `service.port` or `service_port` parameter defined in a settings file.
**Additional context**
This would be especially nice for:
- allowing natural grouping of related settings (eg `service.host` and `service.port`)
- allowing overriding of existing parameters defined in a settings file
- accessing environment variables via object dot notation (eg `settings.SERVICE.PORT`)
| 0easy
|
Title: Extend Datalab to token classification (entity recognition) datasets
Body: Allow Datalab to find label issues when `labels` object is token classification annotations (per-word class labels for each sentence/document).
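For illustration, a sketch of what usage could look like (the `task="token_classification"` value and the nested per-token label format are proposals, not current API):
```python
import numpy as np
from cleanlab import Datalab

# One sentence with per-token labels and per-token predicted probabilities
data = {"tokens": [["I", "love", "Paris"]], "labels": [[0, 0, 1]]}
pred_probs = [np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])]

lab = Datalab(data=data, label_name="labels", task="token_classification")
lab.find_issues(pred_probs=pred_probs)
lab.report()
```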
Related things to look at:
Extending Datalab to other ML tasks:
https://github.com/cleanlab/cleanlab/issues/774
https://github.com/cleanlab/cleanlab/issues/765
https://github.com/cleanlab/cleanlab/pull/796
Existing Datalab code for label issues:
https://github.com/cleanlab/cleanlab/blob/master/cleanlab/datalab/internal/issue_manager/label.py
Adding new issue manager in Datalab:
https://docs.cleanlab.ai/master/cleanlab/datalab/guide/custom_issue_manager.html
Existing Cleanlab code for token classification label issues:
https://github.com/cleanlab/cleanlab/tree/master/cleanlab/token_classification | 0easy
|
Title: heroku在休眠唤醒后,数据库会重置回初始状态
Body: After Heroku wakes from sleep, the database resets to its initial state: the login password, site settings, and products all revert to the state right after installation. I am not sure whether this is a problem with the script or with Heroku itself. | 0easy
|
Title: [Speed up] Currently Gaussian Noise is not optimized for separate uint8 and float32 treatment
Body: It could happen, that
```python
@clipped
def gauss_noise(image: np.ndarray, gauss: np.ndarray) -> np.ndarray:
    image = image.astype("float32")
    return image + gauss
```
could be optimized with something like:
```python
def gauss_noise_optimized(image: np.ndarray, gauss: np.ndarray) -> np.ndarray:
    if image.dtype == np.float32:
        gauss = gauss.astype(np.float32)
        noisy_image = cv2.add(image, gauss)
    elif image.dtype == np.uint8:
        gauss = np.clip(gauss, 0, 255).astype(np.uint8)
        noisy_image = cv2.add(image, gauss)
    else:
        raise TypeError("Unsupported image dtype. Expected uint8 or float32.")
    return noisy_image
```
This requires benchmarking, but technically it is a one-function-replacement pull request, which makes it a good first issue.
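A rough benchmarking sketch for the two versions above (array sizes and noise parameters are arbitrary, and it assumes the helpers above are importable, e.g. `@clipped` from the library's utils):
```python
import timeit

import numpy as np

image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
gauss = np.random.normal(0, 10, image.shape).astype(np.float32)

# time each implementation over 100 calls
print("current:  ", timeit.timeit(lambda: gauss_noise(image, gauss), number=100))
print("optimized:", timeit.timeit(lambda: gauss_noise_optimized(image, gauss), number=100))
```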
Although a more involved approach could be taken if it makes it even faster:
```python
@clipped
def _shift_rgb_non_uint8(img: np.ndarray, r_shift: float, g_shift: float, b_shift: float) -> np.ndarray:
    if r_shift == g_shift == b_shift:
        return img + r_shift
    result_img = np.empty_like(img)
    shifts = [r_shift, g_shift, b_shift]
    for i, shift in enumerate(shifts):
        result_img[..., i] = img[..., i] + shift
    return result_img

def _shift_image_uint8(img: np.ndarray, value: np.ndarray) -> np.ndarray:
    max_value = MAX_VALUES_BY_DTYPE[img.dtype]
    lut = np.arange(0, max_value + 1).astype("float32")
    lut += value
    lut = np.clip(lut, 0, max_value).astype(img.dtype)
    return cv2.LUT(img, lut)

@preserve_shape
def _shift_rgb_uint8(img: np.ndarray, r_shift: ScalarType, g_shift: ScalarType, b_shift: ScalarType) -> np.ndarray:
    if r_shift == g_shift == b_shift:
        height, width, channels = img.shape
        img = img.reshape([height, width * channels])
        return _shift_image_uint8(img, r_shift)
    result_img = np.empty_like(img)
    shifts = [r_shift, g_shift, b_shift]
    for i, shift in enumerate(shifts):
        result_img[..., i] = _shift_image_uint8(img[..., i], shift)
    return result_img

def shift_rgb(img: np.ndarray, r_shift: ScalarType, g_shift: ScalarType, b_shift: ScalarType) -> np.ndarray:
    if img.dtype == np.uint8:
        return _shift_rgb_uint8(img, r_shift, g_shift, b_shift)
    return _shift_rgb_non_uint8(img, r_shift, g_shift, b_shift)
``` | 0easy
|
Title: Getting/Setting slider values on OBS Studio 64
Body: I'm trying to get/set the range of a volume slider on OBS Studio 64 bit. https://obsproject.com/download
I'm on the latest version `21.1.2`. Here is my code:
```python
from pywinauto.application import Application
app = Application(backend='uia').connect(path='obs64.exe')
# Mixers area
mixers = app.top_window().child_window(title="Mixer", control_type="Window")
# the volume slider
slider = mixers.child_window(
    title_re="Volume slider for 'Desktop Audio'",
    control_type="Slider"
)
slider_wrapper = slider.wrapper_object()
print(slider_wrapper.min_value())
```
Here's the exception:
```
Traceback (most recent call last):
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto\uia_defines.py", line 232, in get_elem_interface
iface = cur_ptrn.QueryInterface(cls_name)
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\comtypes\__init__.py", line 1158, in QueryInterface
self.__com_QueryInterface(byref(iid), byref(p))
ValueError: NULL COM pointer access
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:/Users/glenbot/Documents/code/simple-stream-deck/test.py", line 55, in <module>
print(slider_wrapper.min_value())
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto\controls\uia_controls.py", line 434, in min_value
return self.iface_range_value.CurrentMinimum
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto\controls\uiawrapper.py", line 131, in __get__
value = self.fget(obj)
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto\controls\uiawrapper.py", line 258, in iface_range_value
return uia_defs.get_elem_interface(elem, "RangeValue")
File "C:\Users\glenbot\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto\uia_defines.py", line 234, in get_elem_interface
raise NoPatternInterfaceError()
pywinauto.uia_defines.NoPatternInterfaceError
```
I have attempted to set the slider using click coordinates, but mapping click positions to decibel values is difficult to predict. I would rather call `set_value()`, but I get the same error. Any help would be appreciated :)
| 0easy
|
Title: [BUG] Unable to infer shapes for the Q and DQ nodes with INT16 data type.
Body: # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
I am using `from onnx.utils import extract_model` to extract a subgraph from a `16bit QDQ` quantized model, but I find that `onnx.shape_inference.infer_shapes` is unable to infer shapes for `16bit QDQ` nodes. It works fine for the other nodes, including `8bit QDQ` nodes.
### System information
ONNX version: 1.17.0
Python version: 3.9.13
Protobuf version:3.20.3
### Reproduction instructions
### Expected behavior
Since `8bit QDQ` nodes can infer shapes normally, I believe it should also apply to `16bit QDQ` nodes because there won't be any difference in terms of shape.
### Notes
In addition, I found that using `from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference` works fine for `16bit QDQ` nodes. However, I think it might be unreasonable to additionally introduce onnxruntime for `extract_model` in onnx. | 0easy
|
Title: MSGate not in top level
Body: **Description of the issue**
`cirq.MSGate` does not exist, but `cirq.ops.MSGate` does. Think it should be at the top level too?
| 0easy
|
Title: Copy/paste errors in docstrings
Body: **Description of the issue**
Density matrix is mentioned in docstrings for `_BufferedStateVector`, `SimulationState._perform_measurement`, and `Simulator`. Should be changed to "state vector" and `QuantumStateRepresentation` respectively. | 0easy
|
Title: [Feature request] Add keypoint support to GridDropout
Body: Keypoint support comes quite naturally to Dropout transforms.
=> no reason not to add such support to the GridDropout transform | 0easy
|
Title: Contribute `Diverging bar` to Vizro visual vocabulary
Body: ## Thank you for contributing to our visual-vocabulary! 🎨
Our visual-vocabulary is a dashboard that serves as a comprehensive guide for selecting and creating various types of charts. It helps you decide when to use each chart type, offers sample Python code using [Plotly](https://plotly.com/python/), and gives instructions for embedding these charts into a [Vizro](https://github.com/mckinsey/vizro) dashboard.
Take a look at the dashboard here: https://huggingface.co/spaces/vizro/demo-visual-vocabulary
The source code for the dashboard is here: https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary
## Instructions
0. Get familiar with the dev set-up (this should be done already as part of the initial intro sessions)
1. Read through the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary) of the visual vocabulary
2. Follow the steps to contribute a chart. Take a look at other examples. This [commit](https://github.com/mckinsey/vizro/pull/634/commits/417efffded2285e6cfcafac5d780834e0bdcc625) might be helpful as a reference to see which changes are required to add a chart.
3. Ensure the app is running without any issues via `hatch run example visual-vocabulary`
4. List out the resources you've used in the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary)
5. Raise a PR
**Useful resources:**
- Data chart mastery: https://www.atlassian.com/data/charts/how-to-choose-data-visualization | 0easy
|
Title: UI Bug when trying to add user with email taken on Admin Panel
Body: **Environment**:
- CTFd Version/Commit: 3.2.1
- Operating System: Ubuntu 18.04 LTS
- Web Browser and Version:
**What happened?**
When trying to create, via the admin panel, a new user whose email is already taken by another user, nothing happens. Reproducing the same situation on the login page will at least throw up an error. Hopefully the same error message can be applied to the admin panel to prevent people from getting confused.

| 0easy
|
Title: Wrong facet function called when defining a custom group
Body: ## CKAN version
2.9, 2.10, master
## Describe the bug
When a developer defines a custom group with a `group_type` other than `group`, `group_facets` is not called; instead CKAN calls `organization_facets`
https://github.com/ckan/ckan/blob/71d5d1be495ec9662323eb69d6f71b2ccbb894f2/ckan/views/group.py#L378-L387
### Steps to reproduce
Create a custom group with a `group_type` other than `group`. Try to update the facets of said group.
### Expected behavior
`group_facets` should be called when it's not an organization.
| 0easy
|
Title: Cannot create a enum with a deprecation reason supplied
Body: ## How to reproduce
```python
options = {
    'description': 'This my enum',
    'deprecation_reason': 'For the funs'}
graphene.Enum('MyEnum', [('some', 'data')], **options)
```
## What happened
```
File "/Users/Development/saleor/saleor/graphql/core/enums.py", line 35, in to_enum
return graphene.Enum(type_name, enum_data, **options)
File "/Users/Development/saleor-venv/lib/python3.7/site-packages/graphene/types/enum.py", line 49, in __call__
return cls.from_enum(PyEnum(*args, **kwargs), description=description)
TypeError: __call__() got an unexpected keyword argument 'deprecation_reason'
``` | 0easy
|
Title: Broken file link in `run_corpora_and_vector_spaces` tutorial
Body: #### Problem description
The `run_corpora_and_vector_spaces.ipynb` tutorial depends on a file on the web, and that file is missing.
#### Steps/code/corpus to reproduce
See https://groups.google.com/g/gensim/c/nX4lc8j0ZO0
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
Unknown (probably any). | 0easy
|
Title: PSI added to diff of numerical stat columns
Body: **Is your feature request related to a problem? Please describe.**
Within https://github.com/capitalone/DataProfiler/blob/main/dataprofiler/profilers/numerical_column_stats.py#L350
Need to add PSI - https://medium.com/model-monitoring-psi/population-stability-index-psi-ab133b0a5d42
There should be a helper function to calculate it, which is called within the `diff` function.
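A minimal sketch of such a helper (the function name, the bin handling, and the epsilon guard are assumptions; the real version would operate on the profiles' histogram bins):
```python
import numpy as np

def _calculate_psi(expected_counts, actual_counts, eps=1e-4):
    """Population Stability Index between two binned distributions."""
    expected = np.asarray(expected_counts, dtype=float)
    actual = np.asarray(actual_counts, dtype=float)
    # convert counts to percentages, guarding against empty bins
    expected_pct = np.maximum(expected / expected.sum(), eps)
    actual_pct = np.maximum(actual / actual.sum(), eps)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))
```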
**Describe the outcome you'd like:**
receiving PSI in the `diff` command of NumericalStatsMixin
Tests around the addition.
**Additional context:**
| 0easy
|
Title: [BUG] TimeSeries.from_group_dataframe incompatible with integer timestamps
Body: **Describe the bug**
`TimeSeries.from_group_dataframe` should support both timestamp and integer time columns, but it internally converts to `DatetimeIndex` regardless of what is passed.
https://github.com/unit8co/darts/blob/a646adf10fa73d8facb16a611f8a3682dc8a1191/darts/timeseries.py#L869-L871
**To Reproduce**
```python
import pandas as pd
import darts
df = pd.DataFrame({
"group": ["a", "a", "a", "b", "b", "b"],
"t": [0, 1, 2, 0, 1, 2],
"x": [1.0, 2.0, 3.0, 2.0, 3.0, 4.0],
})
ts = darts.TimeSeries.from_group_dataframe(df, group_cols="group", time_col="t")
ts[0].time_index
# DatetimeIndex([ '1970-01-01 00:00:00',
# '1970-01-01 00:00:00.000000001',
# '1970-01-01 00:00:00.000000002'],
# dtype='datetime64[ns]', name='t', freq='ns')
```
**Expected behavior**
`time_index` should be a `RangeIndex`.
**System (please complete the following information):**
- Python version: 3.10
- darts version 0.30.0
| 0easy
|
Title: Change `project-name`, `hide-request` and `hide-response` to use underscore
Body: Change `project-name`, `hide-request` and `hide-response` to use underscore
- `project_name` instead of `project-name`
- `hide_request` instead of `hide-request`
- `hide_response` instead of `hide-response`
https://github.com/scanapi/scanapi/blob/55e30aa92aed6a3a1131e35f357aff2fbdd61bc8/scanapi/reporter.py#L36
https://github.com/scanapi/scanapi/blob/55e30aa92aed6a3a1131e35f357aff2fbdd61bc8/scanapi/hide_utils.py#L14-L15 | 0easy
|
Title: [BUG] Index out of bounds on reading a dataframe
Body: **Describe the bug**
Trying to read in 10000+ row dataframe. Getting an error:
`IndexError: index 10001 is out of bounds for axis 0 with size 10001`
```
Traceback:
File "/app/app.py", line 83, in <module>
main()
File "/app/app.py", line 78, in main
analytics_tab()
File "/app/tabs/pygwalker_analytics/analytics_tab.py", line 146, in analytics_tab
renderer.render_explore()
File "/usr/local/lib/python3.11/site-packages/pygwalker/api/streamlit.py", line 201, in render_explore
html = self._get_html(**{"defaultTab": default_tab})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygwalker/api/streamlit.py", line 147, in _get_html
return self._get_html_with_params_str_cache(params_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/cachetools/__init__.py", line 737, in wrapper
v = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygwalker/api/streamlit.py", line 119, in _get_html_with_params_str_cache
props = self.walker._get_props("streamlit")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygwalker/api/pygwalker.py", line 530, in _get_props
"fieldMetas": self.data_parser.field_metas,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygwalker/data_parsers/base.py", line 138, in field_metas
duckdb.register("pygwalker_mid_table", self._duckdb_df)
```
**Versions**
- pygwalker version: 0.4.7
- python version: 3.11.8
**Additional context**
Similar issue was mentioned in: https://github.com/duckdb/duckdb/issues/10750
Recently DuckDB released [PR](https://github.com/duckdb/duckdb/pull/10768) to fix that issue with: [0.10.1 Bugfix Release ](https://github.com/duckdb/duckdb/releases/tag/v0.10.1).
I think this would be solved with DuckDB version requirement update from 0.10.0 -> 0.10.1
| 0easy
|
Title: Fix: Error using enforce_privacy = True
Body: https://github.com/gventuri/pandas-ai/blob/87dd966e52c21e6359f2797f80826cd49c6bd40f/pandasai/smart_dataframe/__init__.py#L436C57-L436C57
When using `enforce_privacy = True`, an error is generated because only the headers are sent, and `_truncate_head_columns` then tries to truncate 0 rows.
So I propose the following:
```python
if self.lake.config.enforce_privacy:
    return sampled_head
else:
    return self._truncate_head_columns(sampled_head)
```
Instead of using:
```python
return self._truncate_head_columns(sampled_head)
```
This avoids the errors reported by users who use `enforce_privacy = True` | 0easy
|
Title: ISODate parse can use std library
Body: Currently `ISODate`'s `parse` uses `dateutil`:
https://github.com/betodealmeida/shillelagh/blob/7afaf13ec822f8c56895a8aec6ad77a7de2ea600/src/shillelagh/fields.py#L323-L328
however as of Python 3.7, [`date.fromisoformat`](https://docs.python.org/3/library/datetime.html#datetime.date.fromisoformat) should work and be faster + less permissive (it doesn't try to guess the format).
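A sketch of what the stdlib-based method body could look like, assuming the method keeps returning `None` on unparseable input:
```python
import datetime
from typing import Optional

def parse(cls, value: Optional[str]) -> Optional[datetime.date]:
    # stdlib parser instead of dateutil; only accepts YYYY-MM-DD
    if value is None:
        return None
    try:
        return datetime.date.fromisoformat(value)
    except ValueError:
        return None
```
| 0easy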
|
Title: Fix blog link in docs
Body: ### Summary
The blog link in the footer needs to be fixed:
```diff
diff --git a/docs/docusaurus.config.ts b/docs/docusaurus.config.ts
index bc2aa8afb..6fea98a12 100644
--- a/docs/docusaurus.config.ts
+++ b/docs/docusaurus.config.ts
@@ -137,7 +137,7 @@ const config: Config = {
       },
       {
         label: "Blog",
-        to: "https://mlflow.org/releases",
+        to: "https://mlflow.org/blog",
       },
     ],
   },
```
### Notes
- Make sure to open a PR from a **non-master** branch.
- Sign off the commit using the `-s` flag when making a commit:
```sh
git commit -s -m "..."
# ^^ make sure to use this
```
- Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
| 0easy
|
Title: Editing secrets wipes out previous keys
Body: **Describe the bug**
When editing a secret, the previously set keys are not loaded as an option to update; adding a new key to the existing secret wipes out the previous keys associated with the secret.
**To Reproduce**
- Create secret with multiple keys
- Edit the same secret to add or update existing key
- Existing keys aren't listed to update and adding a new key removes all previously associated keys. (see screenshot walkthrough)
**Expected behavior**
I would expect to be able to update existing keys and add new keys without wiping out previously configured keys in the secret. A scenario of just needing to update a rotated password shouldn't wipe out the username or other settings stored in the secret.
**Screenshots**
New secret with multiple keys


Editing secret, existing keys not listed to update if needed

Add new key to existing secret

Previous keys removed and only new key listed

| 0easy
|
Title: Check for existing asyncio event loops after forking
Body: It seems there are some cases where there is already an existing, active event loop after forking, which causes an exception when the child process tries to create a new event loop. It should check for an existing event loop first, and only create one if there's no active loop available.
Target function: https://github.com/jreese/aiomultiprocess/blob/master/aiomultiprocess/core.py#L93
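A minimal sketch of the proposed check (the function name is illustrative):
```python
import asyncio

def get_or_create_event_loop() -> asyncio.AbstractEventLoop:
    """Reuse an existing event loop if one is available; otherwise create one."""
    try:
        loop = asyncio.get_event_loop()
        if loop.is_closed():
            raise RuntimeError("existing loop is closed")
    except RuntimeError:
        # no usable loop in this (child) process yet
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    return loop
```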
See #4 for context. | 0easy
|
Title: Files has not been implemented
Body: Is file upload support planned for the future, or is there an alternative way to do this right now?
I am looking for functionality similar to `requests.post(files=xxx)`.
```
File "D:\Programs\Python310\lib\site-packages\curl_cffi\requests\session.py", line 407, in request
req, buffer, header_buffer = self._set_curl_options(
File "D:\Programs\Python310\lib\site-packages\curl_cffi\requests\session.py", line 230, in _set_curl_options
raise NotImplementedError("Files has not been implemented.")
NotImplementedError: Files has not been implemented.
``` | 0easy
|
Title: missing the `Style` component
Body: https://www.w3.org/TR/html401/present/styles.html#edef-STYLE | 0easy
|
Title: Sphinx API documentation for `dask.config` shows the whole config
Body: https://docs.dask.org/en/stable/configuration.html#api
The rendered signature for `get`, `set`, `refresh` contains a snapshot of the whole dask config, which makes it hard and confusing to read. | 0easy
|
Title: Table alembic_version
Body: **Migrated issue, originally created by Mark Muzenhardt ([@redrooster](https://github.com/redrooster))**
If I want to have a connection between my application version and the alembic database version, how could I do that in the best way?
| 0easy
|
Title: On mobile I have to scroll down to see text input
Body: On Google pixel 6 I see this in a new chat:

I have to scroll down to see the input:

I was showing a family member the chat on their phone today and this confused them.
| 0easy
|
Title: [ENHANCEMENT] Snowflake reader and writer have different parameter names for table name
Body: ## Describe the bug
SnowflakeReader and SnowflakeWriter use different names for the table name parameter (**table** and **dbtable** respectively). Unifying the parameter names will make it easier to reuse the configuration associated with a Snowflake table for both reading and writing without having to change the table parameter name.
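For illustration, a sketch of the reuse this would enable (class and option names follow the issue; imports and other connection options are omitted since the package isn't named here):
```python
# one shared config for a Snowflake table
table_options = {"table": "MY_DB.MY_SCHEMA.MY_TABLE"}

reader = SnowflakeReader(**table_options)  # works today: expects `table`
writer = SnowflakeWriter(**table_options)  # proposed: accept `table` instead of `dbtable`
```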
| 0easy
|
Title: [BUG] Videoclips are too short for the actual script and stop at a still image.
Body: **Describe the bug**
In a video I generated, most of the clips are too short and stop playing although the text/voice continues. A new clip is only loaded when a new "paragraph" starts. The result is a still image until the paragraph is finished.
**Expected behavior**
The next clip is played when the previous one finishes instead of just stopping at a still image.
**Screenshots**
Sample video:
https://drive.google.com/file/d/1h_MSsMkskG3aUy775u0MRUgSiNBLQwO1/view
**Desktop (please complete the following information):**
- OS: Windows
- Browser Firefox
- Python Version 3.11
| 0easy
|
Title: sklearn_api `transform()` methods not compatible with generators
Body: Example (from [TfidfTransformer](https://github.com/RaRe-Technologies/gensim/blob/b3b844e32cf03c28e58586cbd8b66d288d41758d/gensim/sklearn_api/tfidf.py#L157))
```python
if isinstance(docs[0], tuple):
    docs = [docs]
return [self.gensim_model[doc] for doc in docs]
```
This method expects a list of tuples, instead of an iterable. This means that the entire corpus has to be stored as a list in memory, instead of just the TFIDF matrix produced at the end. This is unfeasible for large datasets.
Why do we need to create a list from `docs`, instead of just doing:
```python
return (self.gensim_model[doc] for doc in docs)
``` | 0easy
|
Title: [Documentation] Don't state `ToGray` would apply a conditional inversion
Body: ## Documentation request
I think the documentation of the `ToGray` transform is wrong in stating that it would invert the image if its mean is greater than 127. It does not appear to do so, and the code looks like plain conversions via `cv2`, without a conditional inversion.
| 0easy
|
Title: [core] Split raylet cython file into multiple files
Body: ### Description
Our raylet cython file is a giant: https://github.com/ray-project/ray/blob/master/python/ray/_raylet.pyx, which spans over 5K LOC.
Two issues:
- (minor) the file is too big and hard to manage (e.g., adding new code, deleting old code, and glancing over it)
- (major) a single file is compiled in a single thread by the compiler; right now it takes 5-10 minutes every time to rebuild the target
Proposed solution:
- Split the raylet cython file into smaller files
- Cython here is used to expose the C++ implementation for Python usage, so all imports should be updated accordingly
This issue doesn't require splitting the target cleanly; even separating several functions out is good.
### Use case
_No response_ | 0easy
|
Title: Custom template variables in settings
Body: **Current behaviour**
Currently, in email templates, Django host URLs are getting populated.
**Better approach**
If, as a developer, I want to handle email activation through the front-end, then I need to pass the front-end URL to the templates.
**The solution we can go with:**
We can set the front-end URL in settings and use it in the required places in the package. | 0easy
|
Title: [BUG] DDG style bangs should support bang at the end
Body: **Describe the bug**
DuckDuckGo allows the user to specify bangs at either the start or the end of a custom command, as long as the short form is used: `!g` or `g!`. It would be nice if we had the same functionality here. | 0easy
|
Title: Static Type Hinting: `dataprofiler/validators/base_validators.py`
Body: **Is your feature request related to a problem? Please describe.**
Improve the project in the area of
- Make collaboration and code contribution easier
- Getting started with the code when a contributor is new to the code base
**Describe the outcome you'd like:**
Utilizing types from [mypy](https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html) documentation for python3, implement type hinting in `dataprofiler/validators/base_validators.py`.
**Additional context:**
- [mypy](https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html) documentation
- [Why use python types?](https://medium.com/vacatronics/why-you-should-consider-python-type-hints-770e5cb1570f)
| 0easy
|
Title: Correct Spelling & Add Link
Body: In the header section of the admin,
1. Change `Namah Shivayah` to `Namah Shivaya`
2. Add a link to this repo's wiki as a documentation link

| 0easy
|
Title: Topic 2 Par 2, "median ($50\%)"
Body: Something went wrong with the formula "median ($50\%)" in the boxplot() explanation in the "3. Seaborn" part. Considering the previous article, "median ($50\\%$)" should display correctly, not "($50\\%)". | 0easy
|
Title: request to fix illogical and redundant code in ResourceProtector (solution provided).
Body: [Everywhere](https://docs.authlib.org/en/latest/flask/2/api.html#authlib.integrations.flask_oauth2.ResourceProtector) in documentation about ResourceProtector, we can find:
```python
from authlib.integrations.django_oauth2 import ResourceProtector, BearerTokenValidator
from django.http import JsonResponse
require_oauth = ResourceProtector()
require_oauth.register_token_validator(BearerTokenValidator(OAuth2Token))
```
This [example](https://docs.authlib.org/en/latest/django/2/resource-server.html) with an error (OAuth2Token is not imported) exists in many sources on the internet and probably in many projects. But I don't understand why it cannot be simply organized like this:
`require_oauth = ResourceProtector(token_validator=BearerTokenValidator(OAuth2Token))`
I am not proposing to remove the `.register_token_validator()` method completely, but I am sure that adding the possibility to register a `token_validator` in `__init__` could cut the ResourceProtector initialization code from Authlib in every project by 50%!
For example:
```python
class Protector(ResourceProtector):
    def __init__(self, *args, **kwargs):
        validator = kwargs.pop('validator', None)
        super().__init__(*args, **kwargs)
        validator and self.register_token_validator(validator)
``` | 0easy
|
Title: [Docs] Revise the documentation for layouts to use colours from the palette
Body: A recent PR introduced a [set of diagrams with very jarring colours](https://vizro.readthedocs.io/en/stable/pages/user-guides/layouts/#custom-layout-examples) that were chosen mostly to contrast strongly and clarify the layout grid. It should be possible to use colours from our standard palette rather than RGBM, and this comment includes those we could try.
[Here is the palette](https://github.com/mckinsey/vizro/pull/507#pullrequestreview-2092972824)
[Here is the code in a notebook to create one of the graphics](https://github.com/stichbury/vizro_projects/blob/main/project.ipynb)
The CSS is in the same repo: https://github.com/stichbury/vizro_projects/ | 0easy
|
Title: FSDP Torch XLA vs. FSDPv2 (SMPD) Torch XLA checkpoint saving bug
Body: ### System Info
There is bug in how trainer (SFTTrainer) saves the checkpoint when we use FSDPv2 (SMPD) on TPU. This behavior does not show up with old method to run Torch XLA code ( xla_spawn.py). This behavior causes the new checkpoint to be almost exactly as the base model , throwing this error with PEFT
`Found missing adapter keys while loading the checkpoint: {missing_keys}`
even without PEFT, the weight of the models seems not affected by the training process.
The problem may related to how the saving function with FSDPv2 Torch XLA works in the trainer file. The same code is working 100% with GPU and also is working with xla_spawn.py FSDP method.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To replicate, save the code as sft.py and run it with `PJRT_DEVICE=TPU XLA_USE_SPMD=1 python3 sft.py`:
```
import torch
import torch_xla
import peft
import trl
import torch_xla.core.xla_model as xm
from datasets import load_dataset
from peft import LoraConfig, PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments
from trl import SFTTrainer, SFTConfig
import wandb

wandb.init(mode="disabled")
device = xm.xla_device()  # Set up TPU device.
print(device)

def train():
    model_id = "meta-llama/Llama-3.2-1B-Instruct"
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token
    data = load_dataset("philschmid/dolly-15k-oai-style", split="train")
    lora_config = LoraConfig(r=8, target_modules=["k_proj", "v_proj"], task_type="CAUSAL_LM")
    fsdp_config = {'fsdp_transformer_layer_cls_to_wrap': ['LlamaDecoderLayer'], 'xla': True, 'xla_fsdp_v2': True, 'xla_fsdp_grad_ckpt': True}
    args = SFTConfig(
        per_device_train_batch_size=8,
        num_train_epochs=1,
        max_steps=-1,
        output_dir="output",
        optim="adafactor",
        logging_steps=50,
        learning_rate=2e-5,
        max_seq_length=2048,
        packing=True,
        dataset_text_field=None,
        save_strategy="no",
        dataloader_drop_last=True,  # Required for SPMD.
        fsdp="full_shard",
        fsdp_config=fsdp_config)
    trainer = SFTTrainer(
        model=model,
        train_dataset=data,
        tokenizer=tokenizer,
        args=args,
        peft_config=lora_config)
    trainer.train()
    final_model = trainer.model
    final_model.to("cpu")
    final_model.save_pretrained("./LoRa")

if __name__ == "__main__":
    train()
```
You will notice in the output folder that the saved model is not in LoRa format (there are no adapter files adapter_config.json and adapter_model.safetensors). This is because with FSDPv2 we end up here (you can check by adding a print statement):
https://github.com/huggingface/transformers/blob/62db3e6ed67a74cc1ed1436acd9973915c0a4475/src/transformers/trainer.py#L3821
However, if we use the same code on GPU or with the old xla_spawn (FSDP) method, this issue disappears. To replicate with FSDP, first run
`wget https://raw.githubusercontent.com/huggingface/transformers/refs/heads/main/examples/pytorch/xla_spawn.py`
then save the code below and run it with `python3 xla_spawn.py --num_cores x sft.py`:
```
from datasets import load_dataset
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import TrainingArguments
from trl import SFTTrainer, SFTConfig
import os
from peft import LoraConfig, get_peft_model, PeftModel
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, BitsAndBytesConfig
import transformers
import wandb

wandb.init(mode="disabled")

def main():
    data = load_dataset("philschmid/dolly-15k-oai-style", split="train")
    model_id = "meta-llama/Llama-3.2-1B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.add_special_tokens({'pad_token': tokenizer.eos_token})
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    # target_modules=["k_proj", "v_proj", "embed_tokens", "lm_head"]
    lora_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        bias="none",
        target_modules=["q_proj", "k_proj", "v_proj", "embed_tokens", "lm_head"],
        task_type="CAUSAL_LM",
    )
    trainer = SFTTrainer(
        model=model,
        train_dataset=data,
        args=SFTConfig(
            per_device_train_batch_size=1,
            num_train_epochs=3,
            max_steps=-1,
            output_dir="./output",
            logging_steps=50,
            learning_rate=5e-5,
            max_seq_length=2048,
            save_steps=1000000,
            save_only_model=True,
            packing=True,
            dataset_num_proc=40,
        ),
        peft_config=lora_config,
    )
    trainer.train()
    final_model = trainer.model
    final_model.to("cpu")
    final_model.save_pretrained("./LoRa")

def _mp_fn(index):
    # For xla_spawn (TPUs)
    main()

if __name__ == "__main__":
    main()
```
With this code everything works great, because the saving function ends up here:
https://github.com/huggingface/transformers/blob/62db3e6ed67a74cc1ed1436acd9973915c0a4475/src/transformers/trainer.py#L3824
I merged the LoRa adapter with the base model and the generated output is as expected from a finetuned model!
Finally, please note that this issue is not related to PEFT, because even if you use SFTTrainer without PEFT, this issue still exist. I believe it has to do with how we save checkpoint with FSDPv2 when we use TPUs.
### Expected behavior
The model with LoRa should save two adapter files and when we merge LoRa with the base model we should not have this message (You should update PEFT to the latest version (0.14.0) as it adds additional check to detect problems with LoRa checkpoints.) :
`Found missing adapter keys while loading the checkpoint: {missing_keys}`
| 0easy
|
Title: `NotRequired` and `Required` not properly handled with Python < 3.11 in `TypedDict` conversion
Body: Hi all, I have a few classes that use the `NotRequired` notation from typing_extensions 4.8.0 and Python 3.9.13:
```
class Signal(TypedDict):
    Network: str
    Signal: str

class WriteSignal(Signal):
    Value: str
    Interval: NotRequired[int]
```
This worked fine in RF 6 but when trying to upgrade my project to RF7, I have been encountering this error when running robot tests using the keyword that utilizes this class:
> [ ERROR ] Error in library 'RobotLibrary': Adding keyword 'CANWrite' failed: 'typing_extensions.NotRequired[int]' does not accept parameters, 'typing_extensions.NotRequired[int][int]' has 1.
I am sure I am overlooking something simple here, but everything I have tried doesn't seem to work and I have not been able to figure out a solution. I can tell this error is raised by [typeinfo](https://github.com/robotframework/robotframework/blob/master/src/robot/running/arguments/typeinfo.py) `_report_nested_error`, but I don't see any syntactical error in what I'm doing here. If I prevent this code from running, everything works like in RF 6.
Does anyone have an idea of how I can get around this error? What am I missing here? | 0easy
|
Title: 🛡️ Documentation - Migrate all documentation to CSP compliant examples
Body: ### Overview
There are multiple locations, see details below, in our documentation that have examples of adding JavaScript and sometimes styles. As it stands, these are often provided in a way that will not be CSP compliant and not all developers will be easily able to add `nonce` attributes or other approaches.
Instead we should try to help our community by providing examples using external scripts or where suitable our recommended Stimulus approaches.
### Details
1. [ ] [`docs/extending/admin_views.md`](https://github.com/wagtail/wagtail/blob/main/docs/extending/admin_views.md?plain=1#L86) - `style` tag, to be changed to an external style file.
2. [ ] [`docs/advanced_topics/documents/title_generation_on_upload.md`](https://github.com/wagtail/wagtail/blob/main/docs/advanced_topics/documents/title_generation_on_upload.md?plain=1) JS inline script usage.
3. [ ] [`docs/advanced_topics/images/title_generation_on_upload.md`](https://github.com/wagtail/wagtail/blob/main/docs/advanced_topics/images/title_generation_on_upload.md?plain=1) Probably best done with the other title gen doc, they are 90% the same.
4. [ ] [`docs/reference/hooks.md`](https://github.com/wagtail/wagtail/blob/main/docs/reference/hooks.md?plain=1) JS inline script usage for the fireworks example. This would be a perfect example for a simple Stimulus controller.
5. [ ] Update [`docs/contributing/documentation_guidelines.md`](https://github.com/wagtail/wagtail/blob/main/docs/contributing/documentation_guidelines.md?plain=1) to add a new section at the bottom called '## Code example considerations'. Add a comment to this issue with what you think should go here, but we probably want to call out something simple about CSP compliance, accessibility compliance.
### Approach
* For each usage of `<script>` tags with inline script, provide an updated example that provides the JS code in an external file. See https://security.stackexchange.com/questions/135912/what-is-an-inline-script
* Ensure you fix any escaping, spacing and explain the file location with a comment at the start.
* We can just use the `mysite.js` as a consistent filename as this is what we prepare in the new Wagtail tutorial.
* An example of something like this already existing is on the panels page, see how we provide the example for [`static/js/inline-panel.js`](https://docs.wagtail.org/en/stable/reference/pages/panels.html#javascript-dom-events)
* Reminder, we need to keep the intent of the existing documentation, e.g. the hooks page is to explain how the specific hooks work. It's only the way the JS is added that will change, instead of an inline script it will be an external script. We want examples to be simple, if there's a way to simplify the JS (within reason), that's good also.
### Working on this
* Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html).
* This would be a great task for some new contributors, however, please pick up one at a time. Simply add a comment as to which one you will be doing and get your PR up.
* Please also ensure you reference this issue when you raise a PR, it's very difficult to track otherwise.
### Additional context
* This is similar to the uplift we did a few years ago to avoid using jQuery in docs examples - https://github.com/wagtail/wagtail/pull/7658
* Part of our work to support CSP compliance https://github.com/wagtail/wagtail/issues/1288
* There are some examples of inline `style` that we probably want to leave in, but maybe we should consider adding a warning? e.g. [`images/focal_points.md`](https://github.com/wagtail/wagtail/blob/main/docs/advanced_topics/images/focal_points.md?plain=1#L18)
| 0easy
|
Title: Record token usage for /index commands
Body: We need to be able to keep track of the token usage that occurs when using /index commands. Currently, this is not tracked at all. Passing llm_predictor into the query commands for the gpt-index indices seems to give me weird output for the answer of the query; would love some help on this. | 0easy
|
Title: Code Changes Lines metric API
Body: The canonical definition is here: https://chaoss.community/?p=3591 | 0easy
|
Title: Wrong __repr__ param of Linkage
Body: it should be `coxia_axis` not `new_axis`
https://github.com/mithi/hexapod-robot-simulator/blob/030cdf7b6b293fc38a04ca8a68ee2527f1f4c8d7/hexapod/linkage.py#L193 | 0easy
|
Title: Debug execute messages
Body: Could it be possible to set a logger to debug execution messages?
In aiomysql there's an "echo=true" option that helps a lot when debugging transactions and other problems; could this be possible? | 0easy
|