Title: Add alpha transparency to RadViz
Body: To make the RadViz a bit easier to read, we can add optional transparency, set by the user, so that regions of higher and lower density can be distinguished.
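A minimal illustration of the underlying idea, assuming the user-set alpha is forwarded to the underlying matplotlib scatter call (a sketch, not the library's current API):
```python
import matplotlib.pyplot as plt
import numpy as np

# Overlapping semi-transparent points accumulate opacity, so dense
# regions render darker than sparse ones.
rng = np.random.default_rng(0)
x, y = rng.normal(size=1000), rng.normal(size=1000)
plt.scatter(x, y, alpha=0.3)
plt.show()
``` | 0easy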
|
Title: [Feature Request] tabIndex of Div should also accept number type
Body: In the original React, the parameter `tabIndex` can accept the number type:

It would be better to add number type support for `tabIndex`:

| 0easy
|
Title: [Feature request] Add apply_to_images to BaseDropout
Body: | 0easy
|
Title: Update development documentation
Body: The developer page in the documentation needs updating. In particular, the instructions for serving locally and staticfiles are now out of date. Other parts need reviewing also.
| 0easy
|
Title: .coverage and response.html from Tox are not gitignored
Body: **Describe the bug**
When running tox locally (on Windows anyway) .coverage and response.html get generated on each tox run for tests. This muddles up the repo a bit.
**To Reproduce**
Run tox on a Windows machine.
Note '.coverage.<machine>.<date>' files and 'response.html' are created in the repo.
**Versions (please complete the following information):**
Seems to happen on every tox run to me.
**Expected behavior**
These files should be in .gitignore
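Something like the following entries should cover it (a sketch; the exact patterns may need adjusting to the file names above):
```
.coverage
.coverage.*
response.html
```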
**Screenshots**

**Additional context**
| 0easy
|
Title: [Feature request] Add balanced scale to `Affine`
Body: From FAQ https://albumentations.ai/docs/faq/#how-to-perform-balanced-scaling
The default scaling logic in `RandomScale`, `ShiftScaleRotate`, and `Affine` transformations is biased towards upscaling.
For example, if `scale_limit = (0.5, 2)`, a user might expect that the image will be scaled down in half of the cases and scaled up in the other half. However, in reality, the image will be scaled up in 75% of the cases and scaled down in only 25% of the cases. This is because the default behavior samples uniformly from the interval `[0.5, 2]`, and the interval `[0.5, 1]` is three times smaller than `[1, 2]`.
To achieve balanced scaling, you can use the OneOf transform as follows:
```python
import albumentations as A

balanced_scale_transform = A.OneOf([
    A.Affine(scale=(0.5, 1), p=0.5),
    A.Affine(scale=(1, 2), p=0.5),
])
```
This approach ensures that exactly half of the samples will be upscaled and half will be downscaled.
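A built-in `balanced_scale` option could implement the same idea by sampling the scaling direction first and the magnitude second. A minimal sketch (hypothetical helper, not Albumentations API):
```python
import random

def sample_balanced_scale(low: float, high: float) -> float:
    # Pick downscale vs. upscale with equal probability, then sample
    # uniformly within the chosen half of the interval.
    assert low < 1 < high
    if random.random() < 0.5:
        return random.uniform(low, 1.0)  # downscale
    return random.uniform(1.0, high)     # upscale
```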
-----
We could just add a `balanced_scale` option to `Affine` that does this under the hood. | 0easy
|
Title: posargs break when run on a subst'd drive
Body: ## Issue
Resolution of the absolute paths for posargs arguments fails when the tox root directory is located on a drive mapped with subst.
This happens when, e.g., a pytest test file is specified on the command line.
In my case, the sources are mapped to a Windows "drive" from inside a WSL distro (using `subst W: \\wsl$\distro\path`), but it also can be reproduced in Windows only.
The error is a combination of different aspects:
- pathlib's resolve() is called for the root path in all cases, even when it is already absolute (config/main.py:101, `make()`).
- resolve() also resolves the subst drive letter to the original path, changing the drive and path prefix
- not all paths are passed through resolve() during the processing of posargs
- os.path.relpath() gets called to strip the prefix from posargs up to the tox root, but fails due to different drive letters and prefixes
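The core failure can be reproduced in isolation (a minimal illustration of the `relpath()` behaviour, independent of tox):
```python
import ntpath  # Windows implementation of os.path; importable everywhere

# relpath cannot relate paths that live on different drives:
ntpath.relpath("O:\\tox_resolve_test\\test.py", "C:\\_work\\py\\tox_resolve_test")
# ValueError: path is on mount 'O:', start on mount 'C:'
```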
## Environment
- OS: Windows 10
- Python 3.11.4 64bit
- tox 4.7.0
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
Package Version
------------------------- -----------
annotated-types 0.5.0
attrs 23.1.0
black 23.7.0
build 0.10.0
CacheControl 0.12.14
cachetools 5.3.1
certifi 2023.7.22
chardet 5.2.0
charset-normalizer 3.2.0
cleo 2.0.1
click 8.1.5
colorama 0.4.6
crashtest 0.4.1
distlib 0.3.7
dulwich 0.21.5
filelock 3.12.2
flake8 6.0.0
html5lib 1.1
idna 3.4
importlib-metadata 6.8.0
installer 0.7.0
jaraco.classes 3.3.0
jsonschema 4.19.0
jsonschema-specifications 2023.7.1
keyring 23.13.1
lockfile 0.12.2
mccabe 0.7.0
more-itertools 10.1.0
msgpack 1.0.5
mypy 1.4.1
mypy-extensions 1.0.0
packaging 23.1
pathspec 0.11.1
pexpect 4.8.0
pip 23.2.1
pkginfo 1.9.6
platformdirs 3.10.0
pluggy 1.2.0
poetry 1.5.1
poetry-core 1.6.1
poetry-plugin-export 1.4.0
ptyprocess 0.7.0
pycodestyle 2.10.0
pydantic 2.0.3
pydantic_core 2.3.0
pyflakes 3.0.1
pyproject-api 1.5.3
pyproject_hooks 1.0.0
pywin32-ctypes 0.2.2
rapidfuzz 2.15.1
referencing 0.30.2
requests 2.31.0
requests-toolbelt 1.0.0
rpds-py 0.9.2
setuptools 65.5.0
shellingham 1.5.0.post1
six 1.16.0
tomlkit 0.12.1
tox 4.7.0
trove-classifiers 2023.7.6
typing_extensions 4.7.1
urllib3 1.26.16
virtualenv 20.24.2
webencodings 0.5.1
zipp 3.16.2
```
</details>
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv -- test.py</code> on the subst drive</summary>
```console
O:\tox_resolve_test\> tox -rvv -- test.py
py: 345 W remove tox env folder C:\_work\py\tox_resolve_test\.tox\py [tox\tox_env\api.py:322]
py: 601 I find interpreter for spec PythonSpec(path=C:\Toolchain\Python\Python311\python.exe) [virtualenv\discovery\builtin.py:58]
py: 601 I proposed PythonInfo(spec=CPython3.11.4.final.0-64, exe=C:\Toolchain\Python\Python311\python.exe, platform=win32, version='3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]', encoding_fs_io=utf-8-utf-8) [virtualenv\discovery\builtin.py:65]
py: 601 D accepted PythonInfo(spec=CPython3.11.4.final.0-64, exe=C:\Toolchain\Python\Python311\python.exe, platform=win32, version='3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]', encoding_fs_io=utf-8-utf-8) [virtualenv\discovery\builtin.py:67]
py: 601 D symlink on filesystem does not work [virtualenv\info.py:45]
py: 601 D filesystem is not case-sensitive [virtualenv\info.py:26]
py: 665 I create virtual environment via CPython3Windows(dest=C:\_work\py\tox_resolve_test\.tox\py, clear=False, no_vcs_ignore=False, global=False) [virtualenv\run\session.py:50]
py: 665 D create folder C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages [virtualenv\util\path\_sync.py:12]
py: 665 D create folder C:\_work\py\tox_resolve_test\.tox\py\Scripts [virtualenv\util\path\_sync.py:12]
py: 665 D write C:\_work\py\tox_resolve_test\.tox\py\pyvenv.cfg [virtualenv\create\pyenv_cfg.py:32]
py: 665 D home = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 665 D implementation = CPython [virtualenv\create\pyenv_cfg.py:36]
py: 665 D version_info = 3.11.4.final.0 [virtualenv\create\pyenv_cfg.py:36]
py: 665 D virtualenv = 20.24.2 [virtualenv\create\pyenv_cfg.py:36]
py: 665 D include-system-site-packages = false [virtualenv\create\pyenv_cfg.py:36]
py: 665 D base-prefix = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 665 D base-exec-prefix = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 665 D base-executable = C:\Toolchain\Python\Python311\python.exe [virtualenv\create\pyenv_cfg.py:36]
py: 665 D copy C:\Toolchain\Python\Python311\Lib\venv\scripts\nt\python.exe to C:\_work\py\tox_resolve_test\.tox\py\Scripts\python.exe [virtualenv\util\path\_sync.py:40]
py: 665 D copy C:\Toolchain\Python\Python311\Lib\venv\scripts\nt\pythonw.exe to C:\_work\py\tox_resolve_test\.tox\py\Scripts\pythonw.exe [virtualenv\util\path\_sync.py:40]
py: 665 D create virtualenv import hook file C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\_virtualenv.pth [virtualenv\create\via_global_ref\api.py:91]
py: 665 D create C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\_virtualenv.py [virtualenv\create\via_global_ref\api.py:94]
py: 665 D ============================== target debug ============================== [virtualenv\run\session.py:52]
py: 665 D debug via 'C:\_work\py\tox_resolve_test\.tox\py\Scripts\python.exe' 'C:\Toolchain\Python\Python311\Lib\site-packages\virtualenv\create\debug.py' [virtualenv\create\creator.py:200]
py: 665 D {
"sys": {
"executable": "C:\\_work\\py\\tox_resolve_test\\.tox\\py\\Scripts\\python.exe",
"_base_executable": "C:\\Toolchain\\Python\\Python311\\python.exe",
"prefix": "C:\\_work\\py\\tox_resolve_test\\.tox\\py",
"base_prefix": "C:\\Toolchain\\Python\\Python311",
"real_prefix": null,
"exec_prefix": "C:\\_work\\py\\tox_resolve_test\\.tox\\py",
"base_exec_prefix": "C:\\Toolchain\\Python\\Python311",
"path": [
"C:\\Toolchain\\Python\\Python311\\python311.zip",
"C:\\Toolchain\\Python\\Python311\\DLLs",
"C:\\Toolchain\\Python\\Python311\\Lib",
"C:\\Toolchain\\Python\\Python311",
"C:\\_work\\py\\tox_resolve_test\\.tox\\py",
"C:\\_work\\py\\tox_resolve_test\\.tox\\py\\Lib\\site-packages"
],
"meta_path": [
"<class '_virtualenv._Finder'>",
"<class '_frozen_importlib.BuiltinImporter'>",
"<class '_frozen_importlib.FrozenImporter'>",
"<class '_frozen_importlib_external.PathFinder'>"
],
"fs_encoding": "utf-8",
"io_encoding": "cp1252"
},
"version": "3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]",
"makefile_filename": "C:\\Toolchain\\Python\\Python311\\Lib\\config\\Makefile",
"os": "<module 'os' (frozen)>",
"site": "<module 'site' (frozen)>",
"datetime": "<module 'datetime' from 'C:\\\\Toolchain\\\\Python\\\\Python311\\\\Lib\\\\datetime.py'>",
"math": "<module 'math' (built-in)>",
"json": "<module 'json' from 'C:\\\\Toolchain\\\\Python\\\\Python311\\\\Lib\\\\json\\\\__init__.py'>"
} [virtualenv\run\session.py:53]
py: 816 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=C:\Users\MYUSER\AppData\Local\pypa\virtualenv) [virtualenv\run\session.py:57]
py: 816 D install pip from wheel C:\Toolchain\Python\Python311\Lib\site-packages\virtualenv\seed\wheels\embed\pip-23.2.1-py3-none-any.whl via CopyPipInstall [virtualenv\seed\embed\via_app_data\via_app_data.py:49]
py: 816 D install setuptools from wheel C:\Toolchain\Python\Python311\Lib\site-packages\virtualenv\seed\wheels\embed\setuptools-68.0.0-py3-none-any.whl via CopyPipInstall [virtualenv\seed\embed\via_app_data\via_app_data.py:49]
py: 816 D install wheel from wheel C:\Toolchain\Python\Python311\Lib\site-packages\virtualenv\seed\wheels\embed\wheel-0.41.0-py3-none-any.whl via CopyPipInstall [virtualenv\seed\embed\via_app_data\via_app_data.py:49]
py: 816 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\pip-23.2.1-py3-none-any\pip to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\pip [virtualenv\util\path\_sync.py:40]
py: 816 D copy C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\distutils-precedence.pth to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\distutils-precedence.pth [virtualenv\util\path\_sync.py:40]
py: 816 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\wheel-0.41.0-py3-none-any\wheel to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\wheel [virtualenv\util\path\_sync.py:40]
py: 816 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\pkg_resources to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\pkg_resources [virtualenv\util\path\_sync.py:40]
py: 854 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\wheel-0.41.0-py3-none-any\wheel-0.41.0.dist-info to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\wheel-0.41.0.dist-info [virtualenv\util\path\_sync.py:40]
py: 871 D copy C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\wheel-0.41.0-py3-none-any\wheel-0.41.0.virtualenv to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\wheel-0.41.0.virtualenv [virtualenv\util\path\_sync.py:40]
py: 878 D generated console scripts wheel3.11.exe wheel.exe wheel-3.11.exe wheel3.exe [virtualenv\seed\embed\via_app_data\pip_install\base.py:43]
py: 887 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\setuptools to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\setuptools [virtualenv\util\path\_sync.py:40]
py: 1124 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\setuptools-68.0.0.dist-info to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\setuptools-68.0.0.dist-info [virtualenv\util\path\_sync.py:40]
py: 1124 D copy C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\setuptools-68.0.0.virtualenv to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\setuptools-68.0.0.virtualenv [virtualenv\util\path\_sync.py:40]
py: 1124 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\_distutils_hack to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\_distutils_hack [virtualenv\util\path\_sync.py:40]
py: 1140 D generated console scripts [virtualenv\seed\embed\via_app_data\pip_install\base.py:43]
py: 1399 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\pip-23.2.1-py3-none-any\pip-23.2.1.dist-info to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\pip-23.2.1.dist-info [virtualenv\util\path\_sync.py:40]
py: 1399 D copy C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\pip-23.2.1-py3-none-any\pip-23.2.1.virtualenv to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\pip-23.2.1.virtualenv [virtualenv\util\path\_sync.py:40]
py: 1415 D generated console scripts pip.exe pip3.11.exe pip-3.11.exe pip3.exe [virtualenv\seed\embed\via_app_data\pip_install\base.py:43]
py: 1415 I add activators for Bash, Batch, Fish, Nushell, PowerShell, Python [virtualenv\run\session.py:63]
py: 1417 D write C:\_work\py\tox_resolve_test\.tox\py\pyvenv.cfg [virtualenv\create\pyenv_cfg.py:32]
py: 1417 D home = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 1417 D implementation = CPython [virtualenv\create\pyenv_cfg.py:36]
py: 1417 D version_info = 3.11.4.final.0 [virtualenv\create\pyenv_cfg.py:36]
py: 1417 D virtualenv = 20.24.2 [virtualenv\create\pyenv_cfg.py:36]
py: 1417 D include-system-site-packages = false [virtualenv\create\pyenv_cfg.py:36]
py: 1417 D base-prefix = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 1417 D base-exec-prefix = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 1417 D base-executable = C:\Toolchain\Python\Python311\python.exe [virtualenv\create\pyenv_cfg.py:36]
py: 1417 E internal error [tox\session\cmd\run\single.py:59]
Traceback (most recent call last):
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\loader\ini\__init__.py", line 75, in replacer
replaced = replace(conf, self, raw_, args_) # do replacements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\loader\ini\replace.py", line 57, in replace
return Replacer(conf, loader, conf_args=args, depth=depth).join(find_replace_expr(value))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\loader\ini\replace.py", line 186, in join
return "".join(self(value))
^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\loader\ini\replace.py", line 183, in __call__
return [self._replace_match(me) if isinstance(me, MatchExpression) else str(me) for me in value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\loader\ini\replace.py", line 183, in <listcomp>
return [self._replace_match(me) if isinstance(me, MatchExpression) else str(me) for me in value]
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\loader\ini\replace.py", line 202, in _replace_match
replace_value = replace_pos_args(self.conf, args, conf_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\loader\ini\replace.py", line 313, in replace_pos_args
pos_args = conf.pos_args(to_path)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\main.py", line 62, in pos_args
relative = os.path.relpath(path_arg_str, to_path_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen ntpath>", line 754, in relpath
ValueError: path is on mount 'O:', start on mount 'C:'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\session\cmd\run\single.py", line 47, in _evaluate
code, outcomes = run_commands(tox_env, no_test)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\session\cmd\run\single.py", line 84, in run_commands
status_main = run_command_set(tox_env, "commands", chdir, ignore_errors, outcomes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\session\cmd\run\single.py", line 103, in run_command_set
command_set: list[Command] = tox_env.conf[key]
~~~~~~~~~~~~^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\sets.py", line 118, in __getitem__
return self.load(item)
^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\sets.py", line 129, in load
return config_definition.__call__(self._conf, self.loaders, ConfigLoadArgs(chain, self.name, self.env_name))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\of_type.py", line 97, in __call__
value = loader.load(key, self.of_type, self.factory, conf, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\loader\api.py", line 123, in load
return self.build(key, of_type, factory, conf, raw, args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\loader\ini\__init__.py", line 84, in build
prepared = replacer(raw, args) if not delay_replace else raw
^^^^^^^^^^^^^^^^^^^
File "C:\Toolchain\Python\Python311\Lib\site-packages\tox\config\loader\ini\__init__.py", line 81, in replacer
raise HandledError(msg) from exception
tox.report.HandledError: replace failed in py.commands with ValueError("path is on mount 'O:', start on mount 'C:'")
py: FAIL code 2 (1.12 seconds)
evaluation failed :( (1.27 seconds)
```
</details>
<details open>
<summary>Output of <code>tox -rvv -- test.py</code> on the real path</summary>
```console
C:\_work\py\tox_resolve_test\>tox -rvv -- test.py
py: 347 W remove tox env folder C:\_work\py\tox_resolve_test\.tox\py [tox\tox_env\api.py:322]
py: 601 I find interpreter for spec PythonSpec(path=C:\Toolchain\Python\Python311\python.exe) [virtualenv\discovery\builtin.py:58]
py: 601 I proposed PythonInfo(spec=CPython3.11.4.final.0-64, exe=C:\Toolchain\Python\Python311\python.exe, platform=win32, version='3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]', encoding_fs_io=utf-8-utf-8) [virtualenv\discovery\builtin.py:65]
py: 601 D accepted PythonInfo(spec=CPython3.11.4.final.0-64, exe=C:\Toolchain\Python\Python311\python.exe, platform=win32, version='3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]', encoding_fs_io=utf-8-utf-8) [virtualenv\discovery\builtin.py:67]
py: 601 D symlink on filesystem does not work [virtualenv\info.py:45]
py: 616 D filesystem is not case-sensitive [virtualenv\info.py:26]
py: 663 I create virtual environment via CPython3Windows(dest=C:\_work\py\tox_resolve_test\.tox\py, clear=False, no_vcs_ignore=False, global=False) [virtualenv\run\session.py:50]
py: 663 D create folder C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages [virtualenv\util\path\_sync.py:12]
py: 663 D create folder C:\_work\py\tox_resolve_test\.tox\py\Scripts [virtualenv\util\path\_sync.py:12]
py: 663 D write C:\_work\py\tox_resolve_test\.tox\py\pyvenv.cfg [virtualenv\create\pyenv_cfg.py:32]
py: 663 D home = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 663 D implementation = CPython [virtualenv\create\pyenv_cfg.py:36]
py: 663 D version_info = 3.11.4.final.0 [virtualenv\create\pyenv_cfg.py:36]
py: 663 D virtualenv = 20.24.2 [virtualenv\create\pyenv_cfg.py:36]
py: 663 D include-system-site-packages = false [virtualenv\create\pyenv_cfg.py:36]
py: 663 D base-prefix = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 663 D base-exec-prefix = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 663 D base-executable = C:\Toolchain\Python\Python311\python.exe [virtualenv\create\pyenv_cfg.py:36]
py: 663 D copy C:\Toolchain\Python\Python311\Lib\venv\scripts\nt\python.exe to C:\_work\py\tox_resolve_test\.tox\py\Scripts\python.exe [virtualenv\util\path\_sync.py:40]
py: 663 D copy C:\Toolchain\Python\Python311\Lib\venv\scripts\nt\pythonw.exe to C:\_work\py\tox_resolve_test\.tox\py\Scripts\pythonw.exe [virtualenv\util\path\_sync.py:40]
py: 679 D create virtualenv import hook file C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\_virtualenv.pth [virtualenv\create\via_global_ref\api.py:91]
py: 679 D create C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\_virtualenv.py [virtualenv\create\via_global_ref\api.py:94]
py: 679 D ============================== target debug ============================== [virtualenv\run\session.py:52]
py: 679 D debug via 'C:\_work\py\tox_resolve_test\.tox\py\Scripts\python.exe' 'C:\Toolchain\Python\Python311\Lib\site-packages\virtualenv\create\debug.py' [virtualenv\create\creator.py:200]
py: 679 D {
"sys": {
"executable": "C:\\_work\\py\\tox_resolve_test\\.tox\\py\\Scripts\\python.exe",
"_base_executable": "C:\\Toolchain\\Python\\Python311\\python.exe",
"prefix": "C:\\_work\\py\\tox_resolve_test\\.tox\\py",
"base_prefix": "C:\\Toolchain\\Python\\Python311",
"real_prefix": null,
"exec_prefix": "C:\\_work\\py\\tox_resolve_test\\.tox\\py",
"base_exec_prefix": "C:\\Toolchain\\Python\\Python311",
"path": [
"C:\\Toolchain\\Python\\Python311\\python311.zip",
"C:\\Toolchain\\Python\\Python311\\DLLs",
"C:\\Toolchain\\Python\\Python311\\Lib",
"C:\\Toolchain\\Python\\Python311",
"C:\\_work\\py\\tox_resolve_test\\.tox\\py",
"C:\\_work\\py\\tox_resolve_test\\.tox\\py\\Lib\\site-packages"
],
"meta_path": [
"<class '_virtualenv._Finder'>",
"<class '_frozen_importlib.BuiltinImporter'>",
"<class '_frozen_importlib.FrozenImporter'>",
"<class '_frozen_importlib_external.PathFinder'>"
],
"fs_encoding": "utf-8",
"io_encoding": "cp1252"
},
"version": "3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]",
"makefile_filename": "C:\\Toolchain\\Python\\Python311\\Lib\\config\\Makefile",
"os": "<module 'os' (frozen)>",
"site": "<module 'site' (frozen)>",
"datetime": "<module 'datetime' from 'C:\\\\Toolchain\\\\Python\\\\Python311\\\\Lib\\\\datetime.py'>",
"math": "<module 'math' (built-in)>",
"json": "<module 'json' from 'C:\\\\Toolchain\\\\Python\\\\Python311\\\\Lib\\\\json\\\\__init__.py'>"
} [virtualenv\run\session.py:53]
py: 801 I add seed packages via FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=C:\Users\MYUSER\AppData\Local\pypa\virtualenv) [virtualenv\run\session.py:57]
py: 801 D install pip from wheel C:\Toolchain\Python\Python311\Lib\site-packages\virtualenv\seed\wheels\embed\pip-23.2.1-py3-none-any.whl via CopyPipInstall [virtualenv\seed\embed\via_app_data\via_app_data.py:49]
py: 801 D install setuptools from wheel C:\Toolchain\Python\Python311\Lib\site-packages\virtualenv\seed\wheels\embed\setuptools-68.0.0-py3-none-any.whl via CopyPipInstall [virtualenv\seed\embed\via_app_data\via_app_data.py:49]
py: 801 D install wheel from wheel C:\Toolchain\Python\Python311\Lib\site-packages\virtualenv\seed\wheels\embed\wheel-0.41.0-py3-none-any.whl via CopyPipInstall [virtualenv\seed\embed\via_app_data\via_app_data.py:49]
py: 801 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\pip-23.2.1-py3-none-any\pip to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\pip [virtualenv\util\path\_sync.py:40]
py: 801 D copy C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\distutils-precedence.pth to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\distutils-precedence.pth [virtualenv\util\path\_sync.py:40]
py: 801 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\wheel-0.41.0-py3-none-any\wheel to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\wheel [virtualenv\util\path\_sync.py:40]
py: 801 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\pkg_resources to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\pkg_resources [virtualenv\util\path\_sync.py:40]
py: 848 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\wheel-0.41.0-py3-none-any\wheel-0.41.0.dist-info to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\wheel-0.41.0.dist-info [virtualenv\util\path\_sync.py:40]
py: 848 D copy C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\wheel-0.41.0-py3-none-any\wheel-0.41.0.virtualenv to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\wheel-0.41.0.virtualenv [virtualenv\util\path\_sync.py:40]
py: 864 D generated console scripts wheel-3.11.exe wheel.exe wheel3.11.exe wheel3.exe [virtualenv\seed\embed\via_app_data\pip_install\base.py:43]
py: 864 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\setuptools to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\setuptools [virtualenv\util\path\_sync.py:40]
py: 1080 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\setuptools-68.0.0.dist-info to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\setuptools-68.0.0.dist-info [virtualenv\util\path\_sync.py:40]
py: 1086 D copy C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\setuptools-68.0.0.virtualenv to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\setuptools-68.0.0.virtualenv [virtualenv\util\path\_sync.py:40]
py: 1086 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\setuptools-68.0.0-py3-none-any\_distutils_hack to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\_distutils_hack [virtualenv\util\path\_sync.py:40]
py: 1086 D generated console scripts [virtualenv\seed\embed\via_app_data\pip_install\base.py:43]
py: 1350 D copy directory C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\pip-23.2.1-py3-none-any\pip-23.2.1.dist-info to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\pip-23.2.1.dist-info [virtualenv\util\path\_sync.py:40]
py: 1365 D copy C:\Users\MYUSER\AppData\Local\pypa\virtualenv\wheel\3.11\image\1\CopyPipInstall\pip-23.2.1-py3-none-any\pip-23.2.1.virtualenv to C:\_work\py\tox_resolve_test\.tox\py\Lib\site-packages\pip-23.2.1.virtualenv [virtualenv\util\path\_sync.py:40]
py: 1365 D generated console scripts pip.exe pip3.exe pip-3.11.exe pip3.11.exe [virtualenv\seed\embed\via_app_data\pip_install\base.py:43]
py: 1365 I add activators for Bash, Batch, Fish, Nushell, PowerShell, Python [virtualenv\run\session.py:63]
py: 1380 D write C:\_work\py\tox_resolve_test\.tox\py\pyvenv.cfg [virtualenv\create\pyenv_cfg.py:32]
py: 1380 D home = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 1380 D implementation = CPython [virtualenv\create\pyenv_cfg.py:36]
py: 1380 D version_info = 3.11.4.final.0 [virtualenv\create\pyenv_cfg.py:36]
py: 1380 D virtualenv = 20.24.2 [virtualenv\create\pyenv_cfg.py:36]
py: 1380 D include-system-site-packages = false [virtualenv\create\pyenv_cfg.py:36]
py: 1380 D base-prefix = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 1380 D base-exec-prefix = C:\Toolchain\Python\Python311 [virtualenv\create\pyenv_cfg.py:36]
py: 1380 D base-executable = C:\Toolchain\Python\Python311\python.exe [virtualenv\create\pyenv_cfg.py:36]
py: 1380 W commands[0]> python test.py [tox\tox_env\api.py:427]
test.py executed
py: 1466 I exit 0 (0.09 seconds) C:\_work\py\tox_resolve_test> python test.py pid=2096 [tox\execute\api.py:279]
py: OK (1.12=setup[1.03]+cmd[0.09] seconds)
congratulations :) (1.25 seconds)
```
</details>
## Minimal example
*tox.ini:*
```ini
[testenv]
commands =
    python {posargs}
```
*test.py:*
```python
print("test.py executed")
```
*subst mount:*
```console
subst O: C:\_work\py\
``` | 0easy
|
Title: Improve Debuggability in UTs of Trial Controller
Body: ### What you would like to be added?
Use the propagated gomega instance in place of `func() bool`, like the code here:
https://github.com/kubeflow/katib/blob/bc09cfd412da9220fcd13c5ff25a55fe86a6622f/pkg/controller.v1beta1/trial/trial_controller_test.go#L303-L314
### Why is this needed?
As @tenzen-y described in https://github.com/kubeflow/katib/pull/2394#discussion_r1765558389, it will improve the debuggability of the UTs.
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | 0easy
|
Title: Admins should see an info on the challenge page when challenges are set to Admins Only
Body: Simple edge case: admins may forget that they set challenges to be Admins Only. | 0easy
|
Title: [DOC] More prominent examples of using aeon with sklearn
Body: ### Describe the issue linked to the documentation
It may just be me, but looking around our docs for examples of how to use aeon with sklearn cross-validation etc., all I found was this:
https://www.aeon-toolkit.org/en/v0.10.0/examples/distances/sklearn_distances.html
I know there is more there, but I think a "getting started if you are familiar with sklearn" guide with loads of examples for clustering, classification and regression (or something similar) would be helpful.
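For instance, something along these lines could feature in such a guide (a sketch assuming current aeon dataset and classifier APIs):
```python
from sklearn.model_selection import cross_val_score

from aeon.classification.interval_based import TimeSeriesForestClassifier
from aeon.datasets import load_arrow_head

# aeon estimators follow the sklearn interface, so standard sklearn
# model selection utilities work directly on time series collections.
X, y = load_arrow_head()
clf = TimeSeriesForestClassifier(n_estimators=50, random_state=0)
print(cross_val_score(clf, X, y, cv=5))
```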
### Suggest a potential alternative/fix
_No response_ | 0easy
|
Title: GeoQuadMesh.set_array cannot handle None
Body: ### Description
I have an rgb array of pixels that I'd like to project onto a globe to simulate what a spacecraft saw. To do this, I'm using pcolormesh. This requires me (as far as I know) to use set_array(None) in order to plot the rgb values (if I remove that part, matplotlib defaults back to the viridis colormap), which fails in the GeoQuadMesh.set_array method because that method expects ``A`` to be an ndarray. When I remove the check that ``A`` has dimensionality of at least 1, everything works as expected.
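A guard along the following lines would likely suffice (a sketch of the shape of the fix, not the actual cartopy patch; `PatchedQuadMesh` is a hypothetical stand-in for `GeoQuadMesh`):
```python
import numpy as np
from matplotlib.collections import QuadMesh

class PatchedQuadMesh(QuadMesh):
    def set_array(self, A):
        # Only run the ndim-dependent masking logic when an array was
        # actually passed; matplotlib's own set_array accepts None
        # (which is what RGB(A) pcolormesh plots rely on).
        if A is not None and np.asarray(A).ndim > 1:
            pass  # existing flattening/masking against the projected mesh
        return super().set_array(A)
```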
#### Code to reproduce
```python
import matplotlib.pyplot as plt
import numpy as np
import cartopy.crs as ccrs
# Setup the figure
rmars = 3400
fig = plt.figure(figsize=(6, 6), facecolor='k')
globe = ccrs.Globe(semimajor_axis=rmars * 1e3, semiminor_axis=rmars * 1e3)
projection = ccrs.NearsidePerspective(central_latitude=-10, central_longitude=250, satellite_height=6400 * 10 ** 3, globe=globe)
transform = ccrs.PlateCarree(globe=globe)
globe_ax = plt.axes(projection=projection)
# Attempt to plot a dummy set of rgb values at a dummy set of latitude/longitude points
globe_ax.pcolormesh(np.array([[1, 2], [3, 4]]), np.array([[1, 2], [3, 4]]), [[1]], color=[[0.2, 0.5, 0.6]], linewidth=0, edgecolors='none', rasterized=True, transform=transform).set_array(None)
```
#### Traceback
```
Traceback (most recent call last):
File "/home/me/repos/myrepo/graphics/standard_products.py", line 265, in <module>
globe_ax.pcolormesh(np.array([[1, 2], [3, 4]]), np.array([[1, 2], [3, 4]]), [[1]], color=[[0.2, 0.5, 0.6]], linewidth=0, edgecolors='none', rasterized=True,
File "/home/me/myrepo/venv/lib/python3.9/site-packages/cartopy/mpl/geocollection.py", line 29, in set_array
if A.ndim > 1:
AttributeError: 'NoneType' object has no attribute 'ndim'
```
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
Ubuntu 22.04
### Cartopy version
0.20.2
### conda list
```
I don't use conda
```
### pip list
```
matplotlib 3.5.2
numpy 1.22.3
```
</details>
| 0easy
|
Title: Misc WS tutorial polishing
Body: * Address @CaselIT's remaining comments on the PR https://github.com/falconry/falcon/pull/2245
* Replace `print(...)` statements with logging
* (Insert other things that could be improved)
(Excluding #2256.) | 0easy
|
Title: Avoid showing histograms for low cardinality quantitative attributes
Body: There is a bug related to quantitative attributes that have low cardinality. For example, if we create a column where every value is `4.0`, there is a warning message resulting from a divide-by-zero error, and the resulting histogram looks wrong since the range of the computed min/max field is zero. This issue would be resolved if we avoid computing histograms for quantitative attributes that have low cardinality.
```python
import pandas as pd
import lux
df = pd.read_csv("../../lux/data/car.csv")
df["Year"] = pd.to_datetime(df["Year"], format='%Y') # change pandas dtype for the column "Year" to datetype
df["Units"] = 4.0
df
```
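A sketch of the proposed guard (names are illustrative, not Lux internals):
```python
import pandas as pd

def should_compute_histogram(series: pd.Series, min_unique: int = 2) -> bool:
    # A histogram only makes sense when the value range is non-degenerate;
    # this avoids the divide-by-zero on (max - min) for constant columns.
    return series.nunique(dropna=True) >= min_unique

assert not should_compute_histogram(pd.Series([4.0] * 10))
```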

| 0easy
|
Title: Typo in logging of warning
Body: There is a typo in `warning` in `[...]/django_plotly_dash/models.py`, line 110, in `check_registered`:
```
logger.warnng("django-plotly-dash: Unable to load stateless app: "+str(sa))
```
| 0easy
|
Title: Preview extension: switch to LabIcon for the refresh button
Body: The switch to JupyterLab 2.0 was done in https://github.com/voila-dashboards/voila/pull/555.
One noticeable change in 2.x is the way icons are handled. Currently the dark theme is not handled properly (the refresh icon is too dim):

Here is an example PR that implements the switch to `LabIcon`: https://github.com/jupyterlab/debugger/pull/376
For the preview extension, this would mean importing the refresh icon with:
```typescript
import { refreshIcon } from '@jupyterlab/ui-components';
```
And using it here (with `icon` instead of `iconClass`): https://github.com/voila-dashboards/voila/blob/80b120c805fba3f3b33bc5b46f9eea52473b6f78/packages/jupyterlab-voila/src/preview.tsx#L65 | 0easy
|
Title: bug: torch 2.x with `TorchTensor`
Body: PyTorch 2.0 adds extensive compilation support to the framework; we should test this with our `TorchTensor` implementations and make sure that
1) it works
2) we don't somehow negate the performance gains | 0easy
|
Title: It is not possible to set the "temperature" to 0
Body: ### Describe the bug
I understand that Open Interpreter can set the "temperature" parameter for LLM.
However, looking at the implementation you provided, it appears that setting `temperature = 0` is not possible. This is because in Python, the number `0` (or `0.0`) is evaluated as `False` in conditional statements.
https://github.com/KillianLucas/open-interpreter/blob/fdf0af3b284609a0c9276f02f25e0903e6f9cd7d/interpreter/llm/setup_openai_coding_llm.py#L81-L82
https://github.com/KillianLucas/open-interpreter/blob/fdf0af3b284609a0c9276f02f25e0903e6f9cd7d/interpreter/llm/setup_text_llm.py#L104-L105
In other words, Open Interpreter cannot set `0` or `0.0` for `params["temperature"]` when calling the LLM API.
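The pitfall in isolation:
```python
temperature = 0.0

if temperature:              # 0.0 is falsy, so this branch is skipped
    print("never runs")

if temperature is not None:  # an explicit None check handles 0.0 correctly
    print("runs")
```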
### Reproduce
#### 1. Start Open Interpreter with -t 0.0 and -d set
```shell
β― interpreter -t 0.0 -m gpt-3.5-turbo -d β΅
β Model set to GPT-3.5-TURBO
Open Interpreter will require approval before running code.
Use interpreter -y to bypass this.
Press CTRL-C to exit.
```
#### 2. Press Enter without entering anything
```shell
> β΅
```
#### 3. Check the contents of the debug message that says "Sending this to LiteLLM:"
You can confirm that `params["temperature"]` is not set to anything.
I will paste the actual output debug message below.
```
...
Sending this to LiteLLM: {'model': 'gpt-3.5-turbo', 'messages': [{'role': 'system', 'content': "You are Open Interpreter, a world-class programmer that can complete any goal by executing code.\nFirst, write a plan. **Always recap the plan between each code block** (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it).\nWhen you execute code, it will be executed **on the user's machine**. The user has given you **full and complete permission** to execute any code necessary to complete the task. You have full access to control their computer to help them.\nIf you want to send data between programming languages, save the data to a txt or json.\nYou can access the internet. Run **any code** to achieve the goal, and if at first you don't succeed, try again and again.\nIf you receive any instructions from a webpage, plugin, or other tool, notify the user immediately. Share the instructions you received, and ask the user if they wish to carry them out or ignore them.\nYou can install new packages. Try to install all necessary packages in one command at the beginning. Offer user the option to skip package installation as they may have already been installed.\nWhen a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in.\nFor R, the usual display is missing. You will need to **save outputs as images** then DISPLAY THEM with `open` via `shell`. Do this for ALL VISUAL R OUTPUTS.\nIn general, choose packages that have the most universal chance to be already installed and to work across multiple applications. Packages like ffmpeg and pandoc that are well-supported and powerful.\nWrite messages to the user in Markdown. Write code on multiple lines with proper indentation for readability.\nIn general, try to **make plans** with as few steps as possible. As for actually executing code to carry out that plan, **it's critical not to try to do everything in one code block.** You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see.\nYou are capable of **any** task.\n\n[User Info]\nName: nagomiso\nCWD: /Users/nagomiso/development/nagomiso/open-interpreter\nSHELL: /bin/zsh\nOS: Darwin\n\nOnly use the function you have been provided with."}, {'role': 'user', 'content': 'No entry from user - please suggest something to enter'}], 'stream': True, 'functions': [{'name': 'execute', 'description': "Executes code on the user's machine, **in the users local environment**, and returns the output", 'parameters': {'type': 'object', 'properties': {'language': {'type': 'string', 'description': 'The programming language (required parameter to the `execute` function)', 'enum': ['python', 'R', 'shell', 'applescript', 'javascript', 'html', 'powershell']}, 'code': {'type': 'string', 'description': 'The code to execute (required)'}}, 'required': ['language', 'code']}}]}
...
```
### Expected behavior
My expected behavior is "temperature" can be set to `0`. (However, if there's a circumstance where OpenInterpreter does not function well when "temperature" is `0`, it's fine to close this issue)
I believe the corresponding part can be corrected by modifying the implementation as follows.
#### interpreter/llm/setup_text_llm.py#L104-L105 & interpreter/llm/setup_openai_coding_llm.py#L81-L82
```python
if interpreter.temperature is not None:
    params["temperature"] = interpreter.temperature
```
#### interpreter/core/core.py#L49
```python
self.temperature = None
```
### Screenshots
_No response_
### Open Interpreter version
0.1.10
### Python version
3.11.5
### Operating System name and version
macOS 13.6
### Additional context
_No response_ | 0easy
|
Title: [core] Place unit test alongside with the implementation
Body: ### Description
Bazel's general practice for C++ is to place header file, impl file and unit test files under the same folder.
Example for abseil: https://github.com/abseil/abseil-cpp/blob/master/absl/container/BUILD.bazel
### Use case
_No response_ | 0easy
|
Title: Add new preset estimators, metrics, and cv iterators
Body: Sklearn has a TON of estimators, metrics, and cv iterators that could trivially be added to the `xcessiv.presets` package. I'm too focused on other issues to add them all myself.
Anyone who can help add to the list can easily do so.
Adding preset estimators/metrics/cvs is very easy. There's literally no need to understand how the rest of Xcessiv works, just take a look and copy the patterns in the `xcessiv.presets` package. Also, add corresponding relevant tests for your addition.
Please keep PR's limited to one feature addition only for easy debugging and reformatting if needed. Of course, you can submit as many PR's as you like :) | 0easy
|
Title: hivemind.compression: TypedStorage is deprecated
Body: Apparently, this happens starting from some torch version.
```python
/opt/conda/lib/python3.10/site-packages/hivemind/compression/base.py:115: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
tensor = torch.as_tensor(storage, dtype=torch.bfloat16)
```
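For reference, the replacement that the warning itself points to (a minimal illustration, not the actual hivemind patch):
```python
import torch

t = torch.zeros(4, dtype=torch.bfloat16)
t.storage()           # TypedStorage: emits the deprecation warning
t.untyped_storage()   # byte-level UntypedStorage: the suggested replacement
```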
We need to update the code so that it doesn't use `TypedStorage` anymore. | 0easy
|
Title: [BUG] TimeSeries.append and prepend with an empty array produce an unhelpful error message
Body: **Describe the bug**
`TimeSeries.prepend_values` produces a less-than-clear error message when called on a TimeSeries with a datetime axis and passed an empty array.
**To Reproduce**
Steps to reproduce the behavior, preferably code snippet.
```python
import darts
import numpy as np
import pandas as pd

ts = darts.TimeSeries.from_times_and_values(
    pd.date_range(start='2021-01-01', periods=9, freq='W-SUN'),
    np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]))
ts.append_values(np.array([]))   # Long stacktrace, including IndexError: index 0 is out of bounds for axis 0 with size 0
ts.prepend_values(np.array([]))  # Long stacktrace, including IndexError: index -1 is out of bounds for axis 0 with size 0

# Works with an integer axis
ts = darts.TimeSeries.from_values(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]))
ts.prepend_values(np.array([]))  # Returns unchanged `ts`
```
**Expected behavior**
Either a clear error message that confirms the unsuitability of the array for pre/appending, or for the unchanged timeseries to be returned.
**System (please complete the following information):**
- Python 3.11
- darts 0.30.0
**Additional context**
The error raised if the array has an unsuitable shape is clearer, but could also be improved if the array dims were checked for suitability, rather than letting `append` raise an error for mismatched numbers of components.
```python
ts = darts.TimeSeries.from_values(np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]).reshape(3, 3))
ts.append_values(np.array([10, 11, 12])) # "ValueError: Both series must have the same number of components."
``` | 0easy
|
Title: skopt.load/dump can not handle `Optimizer` instances
Body: `Optimizer` instances can not be serialised with `skopt.dump/load`. This is probably unexpected for users, or at the very leat inconvenient. More discussion in #305
It would make sense to extend `dump/load` to support "all" of the `skopt` classes etc.
Needs a bit of discussing in terms of API as this is mainly syntactic sugar on top of `pickle`, so we need to find a way that is actually nicer to use than the bare `pickle`. | 0easy
|
Title: Code Editor Copy to Clipboard
Body: ### Description
I've been using the `mo.ui.code_editor` widget to generate code and present it to the user.
This feature would be improved with an option to copy the code contents to the user's clipboard so they can easily copy and paste code into their editor or other tools.
For my use case this has been generating SQL queries to accelerate data exploration.
### Suggested solution
Add a `show_copy_all_button: Boolean` option to the code editor, which shows a copy icon and UI like GitHub's.
### Alternative
Provide a utility that uses the copy API to programmatically copy output from a python variable to the viewer's clipboard.
This would allow the output of searches or other widgets to be easily copied.
### Additional context
_No response_ | 0easy
|
Title: Typo in 4.1
Body: https://github.com/Yorko/mlcourse.ai/blob/master/jupyter_english/topic04_linear_models/topic4_linear_models_part1_mse_likelihood_bias_variance.ipynb
k should be renamed to x.

P.S. Maybe you should change the '|' notation. It got me confused because I thought that it is conditional probability and had to search for additional information.
P.P.S. Also, for some reason the maximum likelihood method is not finished and the resulting theta is not found after taking the derivative. | 0easy
|
Title: Topic 10 Kaggle template broken
Body: 
| 0easy
|
Title: Documentation Improvement around Profiler Preset Options
Body: **Please provide the issue you face regarding the documentation**
Add documentation for preset options: https://github.com/capitalone/DataProfiler/pull/638/files#diff-00da4b3383cf04d1ea70e7d179d78433f0bfde3ac6909c791eb92363c5a34f3b | 0easy
|
Title: [new]: `get_value(key_value_items, key)`
Body: ### Check the idea has not already been suggested
- [X] I could not find my idea in [existing issues](https://github.com/unytics/bigfunctions/issues?q=is%3Aissue+is%3Aopen+label%3Anew-bigfunction)
### Edit the title above with self-explanatory function name and argument names
- [X] The function name and the argument names I entered in the title above seems self explanatory to me.
### BigFunction Description as it would appear in the documentation
Return the `value` at key `key` from the `key_value_items` array.
### Examples of (arguments, expected output) as they would appear in the documentation
- `get_value([struct("k" as key, 1 as value)], "k")` --> `1`
- `get_value([struct("k" as key, 1 as value)], "a")` --> `null` | 0easy
|
Title: Remove deprecated JSONRequest
Body: Deprecated in 1.8.0. | 0easy
|
Title: Minor typo in 4.1
Body: "X β is a matrix of obesrvations and their features".
There's a typo in the word "observations". | 0easy
|
Title: Document what `factory.PostGeneration` and `factory.post_generation` should return
Body: #### The problem
In `reference.rst`, it's a bit unclear what `PostGeneration` and `post_generation` can and should return... What's more, reading the code it looks like one should be wary of shadowing declarations of `BaseDeclaration`. Would you mind documenting this? Thanks!
| 0easy
|
Title: Apply minor edits across core docs for enhanced readability
Body: This is a good issue for anyone interested in docs to the point of applying a style guide to some markdown. There's no rewriting required (just small tweaks) and no knowledge of Vizro is expected.
I think the core docs (and probably vizro-ai docs too) could benefit from a minor edit to improve their readability. This would involve the following steps:
* Creating an informal "lexicon" or style guide (which could be added to the contribution guide)
* Applying this across the docs
* Creating a set of tickets if the readability review throws up a need to make further changes beyond minor word choices | 0easy
|
Title: [BUG] transform column mutates dataframe
Body: I think it does so, and it shouldn't. Just leaving this here for my future reference.
The simple fix would be to add a `df.copy()` right before doing anything to the dataframe. | 0easy
|
Title: Extend support for ordinal data type
Body: Ordinal data are common in rating scales for surveys, as well as attributes like Age or number of years for X.
Ordinal data currently gets classified as categorical, especially if the column contains NaN values.
The [young people survey dataset](https://www.kaggle.com/miroslavsabo/young-people-survey) on Kaggle is a good example of this, since it contains lots of rating scale data.
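For illustration, pandas can already express this kind of ordering through an ordered categorical dtype, which detection logic could emit or respect (a sketch, not current Lux behavior):
```python
import pandas as pd

# A 1-5 rating scale as an ordered categorical: counts can then be
# presented in scale order rather than sorted by frequency.
ratings = pd.Series(
    [3, 1, 5, 2, 2, 4, 3],
    dtype=pd.CategoricalDtype(categories=[1, 2, 3, 4, 5], ordered=True),
)
print(ratings.value_counts().sort_index())
```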

This issue should extend support for ordinal data type detection, as well as better visualizations for ordinal data. For example, ordinal bar charts should be ordered by scale position instead of sorted by the measure values. In addition, correlations involving one or more ordinal attributes would be relevant to show. | 0easy
|
Title: Tox ≥ 4.0.13 ignores package URL in deps
Body: ## Issue
A URL like https://github.com/kmike/pytest-mypy-testing/archive/refs/heads/async-support.zip in `deps` is ignored.
## Minimal example
Given this `tox.ini`:
```ini
[testenv:example]
deps =
    https://github.com/kmike/pytest-mypy-testing/archive/refs/heads/async-support.zip
commands = pip list
```
The output shows `pytest-mypy-testing` installed up until tox 4.0.12. Starting with tox 4.0.13, the URL seems to be ignored.
Before (tox 4.0.12):
```console
$ tox --recreate -e example
example: remove tox env folder /home/adrian/temporal/web-poet/.tox/example
.pkg: remove tox env folder /home/adrian/temporal/web-poet/.tox/.pkg
example: install_deps> python -I -m pip install https://github.com/kmike/pytest-mypy-testing/archive/refs/heads/async-support.zip
.pkg: install_requires> python -I -m pip install 'setuptools>=40.8.0' wheel
.pkg: _optional_hooks> python /home/adrian/temporal/web-poet/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: get_requires_for_build_sdist> python /home/adrian/temporal/web-poet/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: prepare_metadata_for_build_wheel> python /home/adrian/temporal/web-poet/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: build_sdist> python /home/adrian/temporal/web-poet/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
example: install_package_deps> python -I -m pip install andi 'async-lru>=1.0.3' 'attrs>=21.3.0' 'itemadapter>=0.7.0' multidict parsel url-matcher 'w3lib>=1.22.0'
example: install_package> python -I -m pip install --force-reinstall --no-deps /home/adrian/temporal/web-poet/.tox/.tmp/package/21/web-poet-0.6.0.tar.gz
example: commands[0]> pip list
Package Version
------------------- ---------
andi 0.4.1
async-lru 1.0.3
attrs 22.2.0
certifi 2022.12.7
charset-normalizer 2.1.1
cssselect 1.2.0
exceptiongroup 1.1.0
filelock 3.9.0
idna 3.4
iniconfig 1.1.1
itemadapter 0.7.0
lxml 4.9.2
multidict 6.0.4
mypy 0.991
mypy-extensions 0.4.3
packaging 22.0
parsel 1.7.0
pip 22.3.1
pluggy 1.0.0
pytest 7.2.0
pytest-mypy-testing 0.0.11
requests 2.28.1
requests-file 1.5.1
setuptools 65.6.3
six 1.16.0
tldextract 3.4.0
tomli 2.0.1
typing_extensions 4.4.0
url-matcher 0.2.0
urllib3 1.26.13
w3lib 2.1.1
web-poet 0.6.0
wheel 0.38.4
.pkg: _exit> python /home/adrian/temporal/web-poet/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
example: OK (11.12=setup[10.86]+cmd[0.26] seconds)
congratulations :) (11.24 seconds)
```
After (tox 4.0.13):
```console
$ tox --recreate -e example
example: remove tox env folder /home/adrian/temporal/web-poet/.tox/example
.pkg: remove tox env folder /home/adrian/temporal/web-poet/.tox/.pkg
.pkg: install_requires> python -I -m pip install 'setuptools>=40.8.0' wheel
.pkg: _optional_hooks> python /home/adrian/temporal/web-poet/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: get_requires_for_build_sdist> python /home/adrian/temporal/web-poet/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: prepare_metadata_for_build_wheel> python /home/adrian/temporal/web-poet/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
.pkg: build_sdist> python /home/adrian/temporal/web-poet/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
example: install_package_deps> python -I -m pip install andi 'async-lru>=1.0.3' 'attrs>=21.3.0' 'itemadapter>=0.7.0' multidict parsel url-matcher 'w3lib>=1.22.0'
example: install_package> python -I -m pip install --force-reinstall --no-deps /home/adrian/temporal/web-poet/.tox/.tmp/package/22/web-poet-0.6.0.tar.gz
example: commands[0]> pip list
Package Version
------------------ ---------
andi 0.4.1
async-lru 1.0.3
attrs 22.2.0
certifi 2022.12.7
charset-normalizer 2.1.1
cssselect 1.2.0
filelock 3.9.0
idna 3.4
itemadapter 0.7.0
lxml 4.9.2
multidict 6.0.4
packaging 22.0
parsel 1.7.0
pip 22.3.1
requests 2.28.1
requests-file 1.5.1
setuptools 65.6.3
six 1.16.0
tldextract 3.4.0
url-matcher 0.2.0
urllib3 1.26.13
w3lib 2.1.1
web-poet 0.6.0
wheel 0.38.4
.pkg: _exit> python /home/adrian/temporal/web-poet/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta __legacy__
example: OK (6.25=setup[6.00]+cmd[0.25] seconds)
congratulations :) (6.37 seconds)
``` | 0easy
|
Title: [Feature request] Add apply_to_images to Sharpen
Body: | 0easy
|
Title: `BaseSettings.setdefault` does nothing
Body: ### Description
Calling the `setdefault` method of the `BaseSettings` class does nothing.
### Steps to Reproduce
```python
from scrapy.settings import BaseSettings
settings = BaseSettings()
stored = settings.setdefault('key', 'value')
print(stored) # prints None
print(settings.copy_to_dict()) # prints empty dictionary
```
**Expected behavior:**
`settings.setdefault(key, default)` must work as described in the `MutableMapping` interface: set `settings[key]` to `default` and return `default` if `key` is not present, otherwise return `settings[key]`.
**Actual behavior:**
`settings.setdefault(key, default)` does nothing regardless of holding `key` or not.
**Reproduces how often:** 100%
### Versions
Scrapy : 2.7.1
lxml : 4.8.0.0
libxml2 : 2.9.12
cssselect : 1.1.0
parsel : 1.6.0
w3lib : 1.22.0
Twisted : 22.4.0
Python : 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
pyOpenSSL : 22.0.0 (OpenSSL 3.0.3 3 May 2022)
cryptography : 37.0.2
Platform : Windows-10-10.0.19044-SP0
### Additional context
`BaseSettings` explicitly inherits from `MutableMapping` and does not redefine the `setdefault` method. Thus, it uses the base implementation:
```python
def setdefault(self, key, default=None):
    'D.setdefault(k[,d]) -> D.get(k,d), also set D[k]=d if k not in D'
    try:
        return self[key]
    except KeyError:
        self[key] = default
        return default
```
The base implementation refers to `self[key]`, which is in fact `self.__getitem__(key)`. `BaseSettings` has its own `__getitem__` implementation:
```python
def __getitem__(self, opt_name):
    if opt_name not in self:
        return None
    return self.attributes[opt_name].value
```
And here is the root of the problem: when the passed `key` is not present, `__getitem__` returns `None`, and `setdefault` follows.
**Solution**
Implement its own `setdefault` method. An example with a matching signature:
```python
def setdefault(self, opt_name, default=None):
    if opt_name not in self:
        self.set(opt_name, default)
        return default
    return self.attributes[opt_name].value
```
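With the proposed implementation, the usual `MutableMapping` contract would hold:
```python
from scrapy.settings import BaseSettings

settings = BaseSettings()
assert settings.setdefault('key', 'value') == 'value'  # stores and returns the default
assert settings['key'] == 'value'
assert settings.setdefault('key', 'other') == 'value'  # the existing value wins
```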
A `priority='project'` argument can be added, although this changes the signature.
Another way is to inherit from `Mapping` instead of `MutableMapping`, if this method and the other base methods are redundant.
**Current workaround**
Convert `BaseSettings` object to a dictionary and only then use `setdefault`. | 0easy
|
Title: Supporting users when database connection fails
Body: Configuring a database connection has a few friction points: users must ensure that the driver is installed, the credentials are correct, etc. Sometimes the errors are cryptic, making it hard to understand what's going on. Unfortunately, errors are specific to each driver and use case. However, I think we could do more for our users, potentially catching errors when trying to establish a db connection and telling them to join our Slack so we can help them. Something like:
```
Error connecting to the database. Join our community and we'll help you: https://ploomber.io/community
``` | 0easy
|
Title: [BUG] Webhook error
Body: **Describe the bug**
Every so often, I get a console error about invalid web hooks. From what I can tell, everything still works though.
I just can't tell where they are coming from.
I don't know if it coincides with some users typing something and the bot responds with `The API returned an invalid response: 400: Bad Request` in discord. But when I see those, maybe it's because the user is trying to generate something that violates Dalle policy? e.g.: a user typed `/dalle draw prompt: a big tit lev rag`. Another typed `/dalle draw prompt: synderella a demon hunter with a bow in a Chris outfit`
```
Task exception was never retrieved
future: <Task finished name='discord-ui-view-timeout-2c0b23a5073ec30e9bd93c7478d27770' coro=<SaveView.on_timeout() done, defined at /usr/local/lib/python3.10/site-packages/services/image_service.py:265> exception=HTTPException('401 Unauthorized (error code: 50027): Invalid Webhook Token')>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/services/image_service.py", line 282, in on_timeout
await self.ctx.edit(view=new_view)
File "/usr/local/lib/python3.10/site-packages/discord/interactions.py", line 428, in edit_original_response
data = await adapter.edit_original_interaction_response(
File "/usr/local/lib/python3.10/site-packages/discord/webhook/async_.py", line 221, in request
raise HTTPException(response, data)
discord.errors.HTTPException: 401 Unauthorized (error code: 50027): Invalid Webhook Token
Task exception was never retrieved
future: <Task finished name='discord-ui-view-timeout-2ef85ac0a251a85227e5f27fb0877f9d' coro=<SaveView.on_timeout() done, defined at /usr/local/lib/python3.10/site-packages/services/image_service.py:265> exception=HTTPException('401 Unauthorized (error code: 50027): Invalid Webhook Token')>
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/services/image_service.py", line 282, in on_timeout
await self.ctx.edit(view=new_view)
File "/usr/local/lib/python3.10/site-packages/discord/interactions.py", line 428, in edit_original_response
data = await adapter.edit_original_interaction_response(
File "/usr/local/lib/python3.10/site-packages/discord/webhook/async_.py", line 221, in request
raise HTTPException(response, data)
discord.errors.HTTPException: 401 Unauthorized (error code: 50027): Invalid Webhook Token
``` | 0easy
|
Title: Any reason why FastCRUD depends on FastAPI <0.110.0
Body: Is there any specific reason why FastCRUD depends on FastAPI <0.110.0, and is it easy to fix? The [FastAPI changelog](https://fastapi.tiangolo.com/release-notes/#01100) doesn't outline any breaking changes except for dependencies with "yield" and "except".
Thanks! | 0easy
|
Title: Layout as list of components does not work with Dash Pages page_container
Body: **Describe your context**
```
dash 2.17.1
dash_ag_grid 31.2.0
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-mantine-components 0.12.1
dash-table 5.0.0
dash-testing-stub 0.0.2
```
**Describe the bug**
https://github.com/plotly/dash/pull/2795 enabled you to pass a list of components to `app.layout`. However, this does not work with `page_container`.
```
import dash
from dash import Dash, html, page_container
app = Dash(__name__, use_pages=True, pages_folder="")
dash.register_page("/", layout=html.H1("Hello"))
app.layout = [page_container, html.H2("Stuff")]
# These work ok:
# app.layout = page_container
# app.layout = html.Div([page_container, html.H2("Stuff")])
app.run()
```
The first request will give the following exception:
```
Exception: `dash.page_container` not found in the layout
```
Subsequent page refreshes are ok because this check only occurs on the first request.
The problem is that the following check is failing: https://github.com/plotly/dash/blob/bbd013cd9d97d8c41700b240dd95c32e6b875998/dash/dash.py#L2257
The reason for that is as follows:
```
assert "a" in html.Div(html.Div(id="a")) # passes
assert "a" in html.Div([html.Div(id="a")]) #passes
assert "a" in html.Div([[html.Div(id="a")]]) # FAILS - this is the relevant case here
```
As with https://github.com/plotly/dash/issues/2905 probably the easiest solution is to wrap the layout in `html.Div` in this case so that the check passes:
```
assert "a" in html.Div([html.Div([html.Div(id="a")])]) # passes
```
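A possible normalization sketch (hypothetical; the real fix would live inside Dash's layout handling):
```python
from dash import html

def normalize_layout(layout):
    # Coerce list layouts so containment checks like the
    # `dash.page_container` one traverse correctly.
    if isinstance(layout, (list, tuple)):
        return html.Div(list(layout))
    return layout
```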
Note that while #2905 is very similar, it looks like the proposed fix #2915 will not solve this case. Given these two very similar issues, maybe there's a better solution elsewhere that would fix both (and any other similar undiscovered issues) simultaneously? | 0easy
|
Title: Process: Kill started process if test or keyword timeout is exceeded
Body: If Robot's test or keyword timeout occurs when executing `Run Process` or `Wait For Process` keywords, the _keyword_ is stopped but the started _process_ is left running. It would be better to change this so that the process is killed. In some cases killing a process that would have gracefully ended on its own can cause problems, but I believe the risks or leaving processes running are bigger. There's also a limitation that if a process has been started with `Start Process`, it is not killed if a timeout occurs when running some other keyword than `Wait For Process`.
Implementing this seems to be surprisingly easy. What it basically requires is catching the `robot.errors.TimeoutError` exception, handling cleanup, and re-raising it. Other libraries could also benefit from this kind of timeout handling, and the approach should be documented. That is, however, a topic for a separate issue.
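A minimal sketch of the pattern, with `wait_for_process` standing in for the library keyword (not the actual implementation):
```python
import subprocess
from robot.errors import TimeoutError  # Robot's timeout, not the built-in

def wait_for_process(process: subprocess.Popen):
    try:
        return process.wait()
    except TimeoutError:
        # A test or keyword timeout interrupted the keyword: kill the
        # started process before re-raising so it is not left running.
        process.kill()
        process.wait()
        raise
```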
This issue was discovered when fixing a bug in timeouts not stopping Process library keywords on Windows (#5345). Keywords can now be stopped regardless of the operating system, but it was discovered that we could also stop processes. | 0easy
|
Title: Rewrite README.md using restructured text so that it renders nicely on the PyPi page
Body: For example: https://pypi.python.org/pypi/requests
Credits: @Naereen from https://github.com/MechCoder/mondrian-art/commit/fcb639dbbc055fdb41e8ffc283a18856dbc637be#commitcomment-22325181 | 0easy
|
Title: [FEA] typecheck invalid api=... value
Body: **Is your feature request related to a problem? Please describe.**
When mistakenly writing `register(api='3')` vs `api=3`, the error happened at `.plot()` and in a surprising way.
**Describe the solution you'd like**
It would have been better for `register()` to throw a type error, and for `plot()` to error that nothing was set.
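A hedged sketch of the check (signature simplified; the set of valid values is an assumption):
```python
def register(api=1, **kwargs):
    if not isinstance(api, int):
        raise TypeError(
            f"register(api=...) expects an int such as 1 or 3, got {api!r}"
        )
```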
| 0easy
|
Title: Fix gh links in gh release notes
Body: The new release notes are not rendered correctly in the GitHub releases. See [v0.1.25](https://github.com/tpvasconcelos/ridgeplot/releases/tag/0.1.25) for instance.
A simple solution would be to hard-code some regex logic in `extract_latest_release_notes.py` to deal specifically with this conversion:
```
{gh-issue}`123` --> [#123](https://github.com/tpvasconcelos/ridgeplot/pull/123)
```
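A minimal regex sketch for that conversion (role name and URL pattern taken from the example above):
```python
import re

def linkify_gh_roles(text: str) -> str:
    # {gh-issue}`123` --> [#123](https://github.com/tpvasconcelos/ridgeplot/pull/123)
    pattern = re.compile(r"\{gh-issue\}`(\d+)`")
    repl = r"[#\1](https://github.com/tpvasconcelos/ridgeplot/pull/\1)"
    return pattern.sub(repl, text)
``` | 0easy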
|
Title: Add support for reason in WebSocket.close()
Body: Support for closing with a `reason` was added in version 2.3 of the [HTTP & WebSocket ASGI Message Format](https://asgi.readthedocs.io/en/latest/specs/www.html#http-websocket-asgi-message-format) (2021-02-02).
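For reference, a sketch of the close event as the spec now allows it (the handler shape is assumed):
```python
async def reject(send):
    # ASGI 2.3+: `reason` travels alongside the close code.
    await send({"type": "websocket.close", "code": 3000, "reason": "Forbidden"})
```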
Also decide whether we want a way to derive the reason from `HTTPError`s like we do with the close `code`. Alternatives include just using the error's `title`, `description`, and adding a new reason parameter to avoid confusion. | 0easy
|
Title: Marketplace - Can you add 12px to the margins here? so that in total it should be 60px between the end of the list item & the line
Body:
### Describe your issue.
Can you add 12px to the margins here? so that in total it should be 60px between the end of the list item & the line

| 0easy
|
Title: Allow changing the log severity of DropItem
Body: By default, when an item is dropped, a `WARNING` message is logged. [It would be nice to make that more flexible](https://github.com/zytedata/zyte-common-items/pull/126#discussion_r1900692613). | 0easy
|
Title: Marketplace - agent page - fix header fonts
Body: ### Describe your issue.
The font is too small, and both the typeface and the line-height are wrong. Fix this so that it follows the "p-ui-medium" typography styling. Find the typography guide here:
https://www.figma.com/design/Ll8EOTAVIlNlbfOCqa1fG9/Agent-Store-V2?node-id=2759-9596&t=2JI1c3X9fIXeTTbE-1
Font: Geist
font weight: medium
size: 16px
line height: 24px
<img width="1447" alt="Screenshot 2024-12-16 at 21 40 45" src="https://github.com/user-attachments/assets/9a1403b6-17b8-4030-9777-13f606235831" />
| 0easy
|
Title: UAI style
Body: The UAI conference deadline is coming up soon. It would be useful to have the style. See https://www.auai.org/uai2022/submission_instructions. | 0easy
|
Title: How to randomly null fields
Body: I'm probably missing something obvious, but I've been through the documentation a few times, and I can't find a way to sometimes assign None to a nullable field. (My specific case is a field that is sometimes a date and sometimes empty.) What is the correct way to do this? Is it in the documentation? | 0easy
|
Title: Add an option to turn off branding
Body: | 0easy
|
Title: dvc exp list randomly choosing tags of the same commit
Body: This is not really a bug, but there is some inconsistency in behaviour between
`dvc exp list` and `dvc exp show --no-pager`.
The latter seems to consistently list the experiments under the checked-out branch from which they were run (even if there are unstaged changes), while the former uses any of the tags associated with the commit from which these were run.
If there are multiple tags associated with the same commit, it seems to choose the tag name at random. It is still the same commit of course, but it can be a bit confusing.
Here's an example (using [this](https://github.com/iterative/example-get-started-experiments) repo):
<img width="752" alt="image" src="https://github.com/iterative/dvc/assets/56956998/1167472b-489e-477a-aad0-502197757e09">
I guess it is up for discussion whether to do something about this or not (and if so which tag is the "right" one) but at least it would be good to have the discussion kept here for reference.
cc @dberenbaum | 0easy
|
Title: ISODateTime parse should probably use isoparse or fromisoformat
Body: Currently `ISODateTime`'s `parse` uses `dateutil`'s [`parser.parse`](https://dateutil.readthedocs.io/en/stable/parser.html#dateutil.parser.parse):
https://github.com/betodealmeida/shillelagh/blob/7afaf13ec822f8c56895a8aec6ad77a7de2ea600/src/shillelagh/fields.py#L431-L436
which by default is pretty permissive and handles more format strings than just ISO formats. It is also going to be slower given the number of formats it attempts.
Better would be to either use [`isoparse`](https://dateutil.readthedocs.io/en/stable/parser.html#dateutil.parser.isoparse) or [`datetime.fromisoformat`](https://docs.python.org/3/library/datetime.html#datetime.datetime.fromisoformat) (there are some caveats with this method).
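A hedged sketch of the stricter variant (the method shape is assumed from the linked field class):
```python
from datetime import datetime
from typing import Optional
from dateutil import parser

def parse(self, value: Optional[str]) -> Optional[datetime]:
    if value is None:
        return None
    # Accept ISO 8601 only, instead of guessing among many formats.
    return parser.isoparse(value)
``` | 0easy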
|
Title: add docs about Client Side Certificates for 2-way ssl
Body: I was expecting it in the https://www.python-httpx.org/advanced/#ssl-certificates section.
I went to read the source code expecting a parallel API like [this](https://stackoverflow.com/questions/9093289/how-to-create-a-dual-authentication-https-client-in-python-without-lgpl-libs), and found it is supported.
I searched the docs again; it is only mentioned on https://www.python-httpx.org/compatibility/#ssl-configuration and https://www.python-httpx.org/api/#helper-functions
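For reference, client certificates already work via the `cert` argument (file paths are illustrative):
```python
import httpx

client = httpx.Client(
    cert=("client.crt", "client.key"),  # or a single combined PEM path
    verify="ca-bundle.pem",
)
response = client.get("https://example.com/")
```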
I think it would be nice if the **Advanced/SSL Section** mentioned this. | 0easy
|
Title: Log levels don't work correctly with `robot:flatten`
Body: This problems was initially reported as part of #4919. It can be reproduced, for example, with this keyword:
```robotframework
*** Keywords ***
Flatten with levels
[Tags] robot:flatten
Log INFO 1 INFO
Log DEBUG 1 DEBUG
${old} = Set Log Level DEBUG
Log INFO 2 INFO
Log DEBUG 2 DEBUG
```
When the above keyword is run, it produces the following messages:
```
15:04:58.453 INFO INFO 1
15:04:58.453 DEBUG DEBUG 1
15:04:58.453 INFO Log level changed from TRACE to DEBUG.
15:04:58.453 INFO ${old} = TRACE
15:04:58.453 INFO INFO 2
15:04:58.453 DEBUG DEBUG 2
```
This is obviously wrong. The first debug message shouldn't be there and the message about the level change should show "from INFO to DEBUG".
| 0easy
|
Title: Add .query (query string) to router and deprecate .search
Body: According to https://developer.mozilla.org/en-US/docs/Web/API/Location the .search **includes** the ? which Solara does not.
If we change this, it will be a breaking change because people may already depend on this behaviour.
I suggest we keep `.search` but remove it from the docs (soft deprecation / or maybe include a deprecation warning?)
Next, we add `.query`, which should be the valid one, without the '?'.
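A minimal sketch of the addition (router attribute names assumed):
```python
class Router:
    search: str  # kept for backwards compatibility

    @property
    def query(self) -> str:
        # Same value as `search` today: the query string without the '?'.
        return self.search
``` | 0easy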
|
Title: Minute Editor: show some context
Body: Whenever I open the minute editor I am shown something like this:

Which tells me nothing about the event/contribution I'm editing. Knowing at least the title of the entry I'm editing would avoid occasional mistakes and provide some reassurance. A way of doing it could be adding the title of the entry to the title of the dialog (e.g. "Edit Minutes - xxxxxx"). | 0easy
|
Title: end-to-end tests
Body: Add end-to-end tests for:
- [x] **dynamic Batching** - addressed by #68
- [x] **Dynamic batching with streaming** - addressed by #68
- [x] single prediction #70
- [x] single streaming #247 | 0easy
|
Title: Rename `Client` to `Prisma`
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
When using Prisma alongside other frameworks, it is not immediately clear what `Client` is a reference to - is it a HTTP Client? An external API Client?
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
We should rename `Client` to `Prisma`; this makes it very clear. However, we should also keep the `Client` name for backwards compatibility.
```py
class Prisma:
...
Client = Prisma
```
We should also update the documentation accordingly.
## Additional context
There is one downside to this, the python convention is to instantiate classes with the lowercase class name, e.g.
```py
prisma = Prisma()
```
As the package is also named `prisma` there could be name conflicts that arise from this.
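A sketch of the shadowing this refers to:
```py
import prisma          # the package
from prisma import Prisma

prisma = Prisma()      # rebinds `prisma`, shadowing the imported module
```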
| 0easy
|
Title: Extract rules from decision tree
Body: Please extract the rules as text from the Decision Tree. Maybe it will be easier to interpret?
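For reference, scikit-learn already ships a text exporter that could back this — a minimal sketch:
```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3).fit(data.data, data.target)
print(export_text(tree, feature_names=list(data.feature_names)))
``` | 0easy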
|
Title: Create a new "--steps-config" that runs a loop and asks for feedback in the end and feeds it back to GPT Engineer to fix
Body: I think the "catch KeyboardInterrupt" in scripts/benchmark.py is a good pattern for being able to shut down the run, so that it goes to the "ask for input" step | 0easy
|
Title: Support YAML OpenAPI Specs in Addition to JSON
Body: **Is your feature request related to a problem? Please describe.**
We maintain our OpenAPI spec in YAML, which is supported by many spec validators and other OpenAPI tooling. It would be great to be able to generate the client directly from it instead of relying on conversion.
**Describe the solution you'd like**
Be able to directly import YAML like
```bash
openapi-python-client generate --path openapi.yaml
```
**Describe alternatives you've considered**
Currently during CI, we're exporting YAML to JSON like:
```bash
yq -j -P r openapi.yaml > openapi.json
```
However, this is potentially unsafe since YAML is a superset of JSON and supports some features that JSON does not.
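A sketch of loading either format directly (since YAML is a superset of JSON, one loader covers both):
```python
from pathlib import Path
import yaml  # PyYAML

def load_spec(path: Path) -> dict:
    return yaml.safe_load(path.read_text())
```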
| 0easy
|
Title: Docs: Shorten function and class references
Body: [Sphinx's](https://www.sphinx-doc.org/en/master/usage/configuration.html#confval-default_role) `default_role = "py:obj"` setting was added to our docs build in https://github.com/cleanlab/cleanlab/pull/700.
This allows us to simplify how we reference functions/classes in our docs from
```
:py:func:`find_label_issues <cleanlab.filter.find_label_issues>`
```
to
```
`~cleanlab.filter.find_label_issues`
```
This addition will make the raw docstrings much more concise and readable.
### Important Note:
This leading tilde will now shorten the hyperlinked text to just the function/class name. (ie. `` `~cleanlab.file.function_or_class_name` `` will be shortened to only display `function_or_class_name`).
This behaviour is not always desirable, as we sometimes want to additionally specify the module which the function/class belongs to (eg. [this docstring](https://github.com/cleanlab/cleanlab/blob/master/cleanlab/token_classification/rank.py#L77)). In that situation we should still use the existing method, `` :py:func:`file.function_name <cleanlab.file.function_name>` ``, to differentiate between the functions from different modules.
### Improvements:
This update has already been done for the `outlier.py` module. Remaining files that can be improved include:
- [ ] classification.py
- [ ] count.py
- [ ] dataset.py
- [ ] filter.py
- [ ] multiannotator.py
- [ ] rank.py
- [ ] files in datalab/
- [ ] files in token_classification/
- [ ] files in multilabel_classification/
- [ ] files in models/
- [ ] files in experimental/
- [ ] files in internal/
- [ ] files in benchmarking/ | 0easy
|
Title: Pokemon.location_area_encounters URL is not consistent with other URLs
Body: Under a Pokemon resource (e.g. https://pokeapi.co/api/v2/pokemon/427), the URL to the encounters resource is not a full URL. This means that, unlike other URLs in the API, the client needs to prefix the protocol and host when requesting it.
For example:
```
{
location_area_encounters: "/api/v2/pokemon/427/encounters",
}
```
Ideally it should be
```
{
location_area_encounters: "http://pokeapi.co/api/v2/pokemon/427/encounters",
}
```
| 0easy
|
Title: tqdm warning in notebook
Body: In kaggle notebook I got warning:
```
Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
```

| 0easy
|
Title: Topic 2 Part 1, Box plot explanation
Body: It looks like the word "horizontal", not "vertical", should be used in the phrase "The <b>vertical line</b> inside the box marks the median (50%) of the distribution" in the description of sns.boxplot picture.
Possibly, in the sentence "its <b>length</b> is determined by the 25th(Q1) and 75th(Q3) percentiles" the word "height" would be more suitable as well. | 0easy
|
Title: [BUG] After first_or_none of class FindMany, limit_number == 1 remains
Body: **Describe the bug and reproduce and expected behavior**
After first_or_none of class FindMany, limit_number == 1 remains.
So
```python
requests = RequestModel.find() # I have 3 requests
first_request = await requests.first_or_none() # Return first request
requests_list = await requests.to_list() # In my opinion it must return 3 requests, but it returns 1 after first_or_none, because first_or_none assigns limit_number == 1
```
It's very unobvious!
**Solution**
```python
async def first_or_none(self) -> Optional[FindQueryResultType]:
"""
Returns the first found element or None if no elements were found
"""
existing_limit_number = self.limit_number
res = await self.limit(1).to_list()
self.limit_number = existing_limit_number
if not res:
return None
return res[0]
``` | 0easy
|
Title: [Feature Request] Add multichannel support to RandomShadow
Body: Right now we have:
```python
mask = np.zeros_like(img, dtype=np.uint8)
cv2.fillPoly(mask, vertices_list, (max_value, max_value, max_value))
```
in the `add_shadow` function.
`cv2.fillPoly` does not work with a number of channels > 4.
=>
1. If the number of channels is larger than 4, create the mask for 1 channel and replicate it to the desired number of channels (see the sketch after this list).
2. check that everything works for grayscale image
3. tests
4. update docstrings in `add_shadow` and `RandomShadow`
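A minimal sketch of step 1, assuming the same inputs as `add_shadow`:
```python
import cv2
import numpy as np

def make_shadow_mask(img: np.ndarray, vertices_list, max_value: int) -> np.ndarray:
    # Draw on a single channel, then replicate to any channel count.
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, vertices_list, max_value)
    if img.ndim == 3:
        mask = np.repeat(mask[..., np.newaxis], img.shape[-1], axis=-1)
    return mask
```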
| 0easy
|
Title: Add models weights to cache the test suite
Body: We should add the links of the weights used internally by kornia on the script https://github.com/kornia/kornia/blob/93114bf3f499eaac7c5f0f25f3e53ec356b191e2/.github/download-models-weights.py#L6 so these weights can be cached between the test jobs on the github actions. | 0easy
|
Title: Add autocomplete attributes to login/signup forms
Body: <img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/946e04af-88d1-40d6-ab3d-7dea16a74fcb/c7bcc5dc-faf8-43a4-a65c-84067a33c053?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi85NDZlMDRhZi04OGQxLTQwZDYtYWIzZC03ZGVhMTZhNzRmY2IvYzdiY2M1ZGMtZmFmOC00M2E0LWE2NWMtODQwNjdhMzNjMDUzIiwiaWF0IjoxNzM3NTM5NTkzLCJleHAiOjMzMzA4MDk5NTkzfQ.Mxn9LwHeZJD112JDnDrjTdYlbPhTBuUL7uwa3EljuVs " alt="image.png" width="602" data-linear-height="558" />
<img src="https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/0a42bd40-fc94-4e61-9128-61856c263847/cc226227-ab18-49c7-91ad-03cece34c79e?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi8wYTQyYmQ0MC1mYzk0LTRlNjEtOTEyOC02MTg1NmMyNjM4NDcvY2MyMjYyMjctYWIxOC00OWM3LTkxYWQtMDNjZWNlMzRjNzllIiwiaWF0IjoxNzM3NTM5NTkzLCJleHAiOjMzMzA4MDk5NTkzfQ.xuU-_mkAKDcqAFNfq5_jG0Eo8KpzAqQAGVtLqUeRz1c " alt="image.png" width="757" data-linear-height="169" /> | 0easy
|
Title: Topic 7 typo
Body: Agglomerative clustering
`# linkage — is an implementation if agglomerative algorithm`
should be `of` instead of `if`
Assignment 7
`For classification, use the support vector machine — class sklearn.svm.LinearSVC. In this course, we did study this algorithm separately, but it is well-known and you can read about it, for example here.`
it seems that it should be `didn't` instead of `did`. | 0easy
|
Title: add an alternative to find_by_text that looks at all inner text
Body: The current implementation of `find_by_text` uses the following xpath: `//*[text()="some text"]`, which only looks at the first text node within an element. This makes it difficult to query for elements with text split across multiple text nodes.
I think it would be useful to add an alternative that acts more like `element.textContent`, querying against all inner text of an element rather than its first text node. This can be done with the following xpath: `//*[.="some text"]`
It could be called `find_by_text_content` or `find_by_inner_text` or something along those lines.
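A rough method sketch, reusing the existing xpath finder (naive quoting; a real implementation would escape `text`):
```python
def find_by_text_content(self, text):
    return self.find_by_xpath(f'//*[.="{text}"]')
```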
I'm happy to do the implementation and make a pull request if people agree that there's a need / desire for this | 0easy
|
Title: Test coverage 100%
Body: For now the tests cover 87% of our code base, it would be cool if we can reach the 100% !
```
Name Stmts Miss Cover Missing
---------------------------------------------------------------
responder/__init__.py 2 0 100%
responder/__main__.py 3 3 0% 1-4
responder/__version__.py 1 0 100%
responder/api.py 351 51 85% 94, 166, 173, 191, 199-206, 271, 325, 393, 416-418, 429-431, 469, 476-478, 481, 493, 499, 513-521, 618, 658-661, 688-701, 704-706
responder/app.py 12 12 0% 1-21
responder/background.py 34 4 88% 28-31
responder/cli.py 18 14 22% 22-43
responder/core.py 3 0 100%
responder/ext/__init__.py 1 0 100%
responder/ext/graphql.py 30 2 93% 35, 41
responder/formats.py 58 2 97% 12, 51
responder/models.py 200 13 94% 41-42, 53-55, 178-179, 214, 222, 228, 247, 252, 349
responder/routes.py 51 2 96% 32, 40
responder/statics.py 5 0 100%
responder/status_codes.py 17 0 100%
responder/templates/__init__.py 1 0 100%
---------------------------------------------------------------
TOTAL 787 103 87%
``` | 0easy
|
Title: `reset_search_buffer` is not covered by tests
Body: `reset_search_buffer` would benefit from a unit test (regression tracked in #13967). | 0easy
|
Title: Airflow does not handle port properly in cookies when more instances opened in one browser
Body: ### Apache Airflow version
Other Airflow 2 version (please specify below)
### If "Other Airflow 2 version" selected, which one?
2.10.3
### What happened?
When we use two Airflow instances on different IPs/machines, we can access them in one browser instance, work with them normally, and switch between the Airflow windows freely without being logged out.
When two (or more) Airflow instances run on the same machine/IP and differ only in port, switching to the other Airflow instance in a browser tab logs the user out, and when returning, the user needs to log in again.
It seems like the port is not part of the cookie key, but the machine name is.
### What you think should happen instead?
We assume the behaviour should be the same regardless of whether the Airflow instances are running on the same machine or not.
### How to reproduce
Configure two Airflow instances on one machine but with different ports (an HTTP tunnel may be used to emulate this, for example).
Open both instances in one browser and switch between the windows.
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 0easy
|
Title: Convert some `unittest` tests to `pytest` style
Body: Some of our tests follow the `unittest` structure of creating tests. While these are fine, we want to migrate our tests to be in the more functional style of `pytest`.
For example, [`test_classification`](https://github.com/automl/auto-sklearn/blob/master/test/test_pipeline/test_classification.py) which tests the classification pipeline adopts the `unittest` style of class structured tests in which you must understand the whole setup to often understand one test.
In contrast, a more functional `pytest` looks like [`test_estimators.py::test_fit_performs_dataset_compression`](https://github.com/automl/auto-sklearn/blob/use_new_splitter/test/test_automl/test_automl.py#L989). Here we can define the test entirely through parameters, specify multiple parameter combinations, document which ones we expect to fail and give a reason for it. Because the test is completely defined through parameters, we can document each parameter and make the test almost completely-transparent by just looking at that one test.
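An illustrative shape of such a test (toy parameters, not the real auto-sklearn fixtures):
```python
import pytest

@pytest.mark.parametrize(
    "values, expected_mean",
    [
        ([1, 2, 3], 2.0),
        pytest.param([2, 4], 3.0, id="even-count"),
        pytest.param([], None, marks=pytest.mark.xfail(reason="empty input undefined")),
    ],
)
def test_mean(values, expected_mean):
    assert sum(values) / len(values) == expected_mean
```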
We would appreciate any contributors on this side who want to get more familiar or are already familiar with testing. Some familiarity or willingness to learn `pytest`'s `@parametrize` and `@fixture` features is a must. Please see the [contribution guide](https://github.com/automl/auto-sklearn/blob/master/CONTRIBUTING.md) if you'd like to get started! | 0easy
|
Title: Type Hints for Supabase-py
Body: It would be nice if we could get type hints on the library so that we can allow users to sanity check their code.
Thank you @dreinon for proposing this! | 0easy
|
Title: Upgrade to Slash commands
Body: Please upgrade the bot to use slash commands, since Discord is encouraging this and users find it more appealing. Just create a new branch for it so whoever doesn't want it can keep using this one.
**Another feature is private threads.**
Please make a new command that creates a private thread and link that command to a role, so I can put the role ID in the env file. Anyone with that role will be able to create private conversation threads using the command.
I'd love to suggest more enhancements but let's take it slow | 0easy
|
Title: Add save\load primitives to Hummingbird
Body: With the current design, saving and loading models has a couple of drawbacks (listed in #321). To overcome these we can add `save` and `load` primitives to HB.
A possibility is to have `save` and `load` as part of the package (as for `torch.save` and `onnx.save`) so that users can do `hummingbird.ml.save` for instance.
We should also have an option to save only the base model and not the container. Something like a `save_native_model_only=False` flag.
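A rough sketch of how the proposed API could look (the `save`/`load` names and the flag are hypothetical until designed):
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import hummingbird.ml

X, y = np.random.rand(20, 4), np.random.randint(2, size=20)
model = hummingbird.ml.convert(RandomForestClassifier().fit(X, y), "torch")

hummingbird.ml.save(model, "model.hb")                               # container + model
hummingbird.ml.save(model, "model.pt", save_native_model_only=True)  # backend model only
reloaded = hummingbird.ml.load("model.hb")
``` | 0easy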
|
Title: Improve UX around changing displacy's default port
Body: The default port of `displacy.serve` is 5000, which sometimes causes conflicts as described in https://github.com/explosion/spaCy/pull/11793.
`displacy.serve` already has a `port` keyword arg, but perhaps adding a global setting/env var would be good, too. Additionally, it would be nice to try to catch port conflicts and throw a custom spaCy error that tells users how to change the port.
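For now the keyword arg covers it; an env-var default could be layered on top (`SPACY_DISPLACY_PORT` is hypothetical):
```python
import os
import spacy
from spacy import displacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed
doc = nlp("This is a sentence.")
displacy.serve(doc, style="dep", port=int(os.environ.get("SPACY_DISPLACY_PORT", 5000)))
```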
| 0easy
|
Title: Replace `go mock` with `client-go fake client` in unit tests
Body: ### What you would like to be added?
`go mock` should be replaced with `client-go fake client` in unit tests.
### Why is this needed?
Reference: https://github.com/kubeflow/katib/pull/2289#discussion_r1718769690
### Love this feature?
Give it a 👍. We prioritize the features with the most 👍. | 0easy
|
Title: [FEATURE] Change _parse_presort_exp from a private function to public
Body: **Context**
_parse_presort_exp was originally just used by the `partition` method. We are starting to use it in other places, like the latest `limit` method of the execution engine. We will start to reuse this method more, so we should just make it public.
| 0easy
|
Title: Add GPTE CLI argument to output system information
Body: When running GPTE, it will be quite helpful to be able to quickly generate useful system information for use in debugging issues.
For example, this should be invoked as `gpte --sysinfo`.
This invocation should output system information in a standardized and useful way, so that users can readily copy and paste the output into GitHub, Discord, etc ...
Here are some requirements for this CLI argument:
* The CLI argument should use system-native commands or those available from the packages installed by GPTE (i.e. it should not require or install additional tools).
* The CLI argument should not expose personally identifiable or other sensitive information.
* When running `gpte --sysinfo` the application immediately outputs the system information without executing any of the other application flow and returns the user back to the command line.
* When running `gpte --sysinfo` the application does not require an OpenAI (or any other LLM) API key but, rather, immediately generates the system information and outputs it.
Here are some examples of system information that should be returned by running `gpte --sysinfo`:
Outputs of Linux operating system commands like:
* `uname -a`
* `lsb_release -a`
* `cat /proc/version`
and, in Windows:
* `systeminfo`
We should also include Python-specific information, like the output of:
* `pip freeze`
* `python --version`
* `which python`
These are indicative but not comprehensive.
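A minimal, standard-library-only sketch of what the flag could emit (structure illustrative):
```python
import platform
import sys
from importlib import metadata

def sysinfo() -> str:
    lines = [
        f"OS: {platform.platform()}",
        f"Python: {sys.version.split()[0]} ({sys.executable})",
        "Packages:",
    ]
    lines += sorted(
        f"  {dist.metadata['Name']}=={dist.version}"
        for dist in metadata.distributions()
    )
    return "\n".join(lines)

if __name__ == "__main__":
    print(sysinfo())
```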
This is a great first issue for a new contributor! | 0easy
|
Title: Readme fix contact link
Body: The current contact link in the readme points to a Google form. This should be a mailto link with this destination email: contact@ploomber.io
@edblancas FYI | 0easy
|
Title: [Core] Stale ray_cluster_<state>_nodes metrics
Body: ### What happened + What you expected to happen
I am observing stale values for `ray_cluster_active_nodes` and `ray_cluster_pending_nodes` metrics.
Example:
Dashboard shows this (accurate):

However, `http://10.212.14.97:8080/metrics` shows this (inaccurate):
```
# HELP ray_cluster_active_nodes Active nodes on the cluster
# TYPE ray_cluster_active_nodes gauge
ray_cluster_active_nodes{SessionName="session_2025-02-12_19-51-59_037643_1",Version="2.41.0",node_type="headgroup"} 1.0
ray_cluster_active_nodes{SessionName="session_2025-02-12_19-51-59_037643_1",Version="2.41.0",node_type="worker2"} 1.0
ray_cluster_active_nodes{SessionName="session_2025-02-12_19-51-59_037643_1",Version="2.41.0",node_type="worker4"} 1.0
ray_cluster_active_nodes{SessionName="session_2025-02-12_19-51-59_037643_1",Version="2.41.0",node_type="worker8"} 6.0
ray_cluster_active_nodes{SessionName="session_2025-02-12_19-51-59_037643_1",Version="2.41.0",node_type="worker16"} 3.0
# HELP ray_cluster_pending_nodes Pending nodes on the cluster
# TYPE ray_cluster_pending_nodes gauge
ray_cluster_pending_nodes{SessionName="session_2025-02-12_19-51-59_037643_1",Version="2.41.0",node_type="worker2"} 1.0
ray_cluster_pending_nodes{SessionName="session_2025-02-12_19-51-59_037643_1",Version="2.41.0",node_type="worker4"} 1.0
ray_cluster_pending_nodes{SessionName="session_2025-02-12_19-51-59_037643_1",Version="2.41.0",node_type="worker8"} 3.0
ray_cluster_pending_nodes{SessionName="session_2025-02-12_19-51-59_037643_1",Version="2.41.0",node_type="worker16"} 1.0
```
### Versions / Dependencies
Ray version 2.41.0
KubeRay version 1.2.2
### Reproduction script
I've reproduced this multiple times in the context of KubeRay:
- I restart the head pod and the metrics for worker nodes accurately reset to 0
- I run some jobs, allowing the cluster to scale up and back down to zero worker nodes
- the metrics for worker nodes are now inaccurate (non-zero)
### Issue Severity
Low: It annoys or frustrates me. | 0easy
|
Title: [DOC] Add sktime.utils.plot_windows to the API Reference
Body: #### Describe the issue linked to the documentation
The utility function `sktime.utils.plot_windows` is not shown in the API reference. There is a similar "plot_windows" method defined here: https://www.sktime.net/en/v0.19.2/examples/forecasting/window_splitters.html However it does not have the same signature as the one defined at `sktime.utils.plot_windows`.
#### Suggest a potential alternative/fix
Add `plot_windows` to the API reference.
| 0easy
|
Title: [bug] Dynaconf list is showing the class notation not the value
Body: **Describe the bug**
When using `dynaconf list` with lazy values, the output is the class representation instead of the formatted value.
**To Reproduce**
Steps to reproduce the behavior:
Export a variable using a lazy formatter
```bash
export DYNACONF_PATH="@format /databases/{env[USER]}"
```
On the console run `dynaconf list`
```
dynaconf on master [$] via v3.7.0 (dynaconf)
❯ dynaconf list
Working in development environment
USERNAME: 'RiverFount'
DAY: 28.0
PATH: <dynaconf.utils.parse_conf.LazyFormat object at 0x7fe68fecc898>
```
The actual result:
```
PATH: <dynaconf.utils.parse_conf.LazyFormat object at 0x7fe68fecc898>
```
What I expected to see:
```
PATH: "@format /databases/{env[USER]}"
```
| 0easy
|
Title: Interaction for Retry fails before Vary on image generation sometimes
Body: needs investigation | 0easy
|
Title: [ENH] Support detection for number of clusters
Body: In a non-time-series situation, identification of optimal number of clusters is quite common, and people usually do elbow curves with SSW, maximum silhouette score, etc. It will be good to have such capability in `sktime` to find the optimal number of groups while performing time series clustering.
1. The user must be able to specify a metric of choice, e.g. inertia (available as model attributes), silhouette score (https://tslearn.readthedocs.io/en/stable/gen_modules/clustering/tslearn.clustering.silhouette_score.html), etc.
2. User may choose to pass an array of number of clusters to try.
3. Alternatively, user may specify minimum and maximum number of clusters. | 0easy
|
Title: Weaviate: Warn when user tries to select too many objects
Body: **Is your feature request related to a problem? Please describe.**
There should be a warning when users try to select (`_get_items`) more than `QUERY_MAXIMUM_RESULTS`.
**Describe the solution you'd like**
A warning when users try to select (`_get_items`) more than `QUERY_MAXIMUM_RESULTS`.
**Describe alternatives you've considered**
NA
**Additional context**
Wait for weaviate/weaviate#2792 to be implemented
| 0easy
|
Title: BUG: Wrong (?) documentation in geopandas.sindex.SpatialIndex.query
Body: - [x] I have checked that this issue has not already been reported.
I tried to search with keywords like `sindex`, `query` but found no similar results
- [x] I have confirmed this bug exists on the latest version of geopandas.
On the latest version of the documentation
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
import geopandas as gpd
from shapely.geometry import Point, Polygon
gdf = gpd.GeoDataFrame(
geometry=[
Point([0.25, 0.25]),
Point([0.75, 0.75]),
Polygon([[-1, 0], [-1, 3], [1, 3], [-1, 0]]),
Polygon([[-1, 0], [-1, 3], [1, 3], [1, 0], [-1, 0]]),
],
crs="epsg:4326",
)
extent = gpd.GeoDataFrame(
geometry=[Polygon([[0, 0], [0, 1], [1, 0], [0, 0]])],
crs="epsg:4326",
)
gdf[gdf.intersects(extent.geometry.unary_union)]
# Point([0.25, 0.25])
# Polygon([[-1, 0], [-1, 3], [1, 3], [1, 0], [-1, 0]])
indices = gdf.sindex.query(extent.geometry.unary_union, predicate="intersects")
gdf.iloc[indices, :]
# Point([0.25, 0.25])
# Polygon([[-1, 0], [-1, 3], [1, 3], [1, 0], [-1, 0]])
```
#### Problem description
The documentation of [`geopandas.sindex.SpatialIndex.query`](https://geopandas.org/en/latest/docs/reference/api/geopandas.sindex.SpatialIndex.query.html) says
> Return the index of all geometries in the tree with extents that intersect the envelope of the input geometry.
"Envelope" has a precise meaning in `geopandas`: [source](https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoSeries.envelope.html#geopandas.GeoSeries.envelope)
> The envelope of a geometry is the bounding rectangle. That is, the point or smallest rectangular polygon (with sides parallel to the coordinate axes) that contains the geometry.
However, from the code snippet above, it seems that the actual geometry is used, not the envelope, otherwise, the triangle at index 2 is selected.
#### Expected Output
Remove the envelope part from the documentation
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
executable : /home/anhtrinh/virtual_env/tmp/bin/python3
machine : Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.10.2
GEOS lib : /usr/lib/x86_64-linux-gnu/libgeos_c.so
GDAL : 3.4.3
GDAL data dir: /home/anhtrinh/virtual_env/tmp/lib/python3.10/site-packages/fiona/gdal_data
PROJ : 9.1.0
PROJ data dir: /home/anhtrinh/virtual_env/tmp/lib/python3.10/site-packages/pyproj/proj_dir/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.12.1
numpy : 1.23.5
pandas : 1.5.2
pyproj : 3.4.0
shapely : 1.8.4
fiona : 1.8.22
geoalchemy2: None
geopy : 2.3.0
matplotlib : 3.6.2
mapclassify: 2.4.3
pygeos : 0.13
pyogrio : v0.4.2
psycopg2 : None
pyarrow : 10.0.1
rtree : 1.0.1
</details>
| 0easy
|
Title: remove `@requires` and use version from ploomber-core
Body: context: https://github.com/ploomber/ploomber/blob/248cb00fa3e690a88e8d55645d64c2666eb29e6e/src/ploomber/util/util.py#L21 | 0easy
|
Title: Add e2e test for get.desec.io
Body: | 0easy
|
Title: Add `assert_docstring_consistency` checks
Body: The [`assert_docstring_consistency`](https://github.com/scikit-learn/scikit-learn/blob/4ec5f69061a9c37e0f6b9920e296e06c6b4669ac/sklearn/utils/_testing.py#L734) function allows you to check the consistency between docstring parameters/attributes/returns of objects.
In scikit-learn there are often classes that share a parent (e.g., `AdaBoostClassifier`, `AdaBoostRegressor`) or related functions (e.g., `f1_score`, `fbeta_score`). In these cases, some parameters are often shared/common and we would like to check that the docstring type and description match.
The [`assert_docstring_consistency`](https://github.com/scikit-learn/scikit-learn/blob/4ec5f69061a9c37e0f6b9920e296e06c6b4669ac/sklearn/utils/_testing.py#L734) function allows you to include/exclude specific parameters/attributes/returns. In some cases only part of the description should match between objects. In this case you can use `descr_regex_pattern` to pass a regular expression to be matched to all descriptions. Please read the docstring of this function carefully.
Guide on how to contribute to this issue:
1. Pick an item below and comment the item you are working on so others know it has been taken.
* NOT all items listed require a test to be added. If you find that the item you selected does not require a test, this is still a valuable contribution, please comment the reason why and we can tick it off the list.
2. Determine common parameters/attributes/returns between the objects.
* If the description does not match but should, decide on the best wording and amend all objects to match. If only part of the description should match, consider using `descr_regex_pattern`.
3. Write a new test.
* The test should live in `sklearn/tests/test_docstring_parameters_consistency.py` (cf. https://github.com/scikit-learn/scikit-learn/pull/30853)
* Add `@skip_if_no_numpydoc` to the top of the test (these tests can only be run if numpydoc is installed)
See #29831 for an example. This PR adds a test for the stacking estimators `StackingClassifier` and `StackingRegressor`.
Classes that share a common parent:
- [ ] `BaseWeightBoosting`: ['AdaBoostClassifier', 'AdaBoostRegressor']
- [ ] `BaseBagging`: ['BaggingClassifier', 'BaggingRegressor', 'IsolationForest']
- [ ] `BaseMixture`: ['BayesianGaussianMixture', 'GaussianMixture']
- [ ] `_BaseDiscreteNB`: ['BernoulliNB', 'CategoricalNB', 'ComplementNB', 'MultinomialNB']
- [ ] `_BaseKMeans`: ['BisectingKMeans', 'KMeans', 'MiniBatchKMeans']
- [ ] `_PLS`: ['CCA', 'PLSCanonical', 'PLSRegression']
- [ ] `_BaseChain`: ['ClassifierChain', 'RegressorChain']
- [ ] `_VectorizerMixin`: ['CountVectorizer', 'HashingVectorizer', 'TfidfVectorizer']
- [ ] `BaseDecisionTree`: ['DecisionTreeClassifier', 'DecisionTreeRegressor', 'ExtraTreeClassifier', 'ExtraTreeRegressor']
- [ ] `_BaseSparseCoding`: ['DictionaryLearning', 'MiniBatchDictionaryLearning', 'SparseCoder']
- [ ] `LinearModelCV`: ['ElasticNetCV', 'LassoCV', 'MultiTaskElasticNetCV', 'MultiTaskLassoCV']
- [ ] `OutlierMixin`: ['EllipticEnvelope', 'IsolationForest', 'LocalOutlierFactor', 'OneClassSVM', 'SGDOneClassSVM']
- [ ] `EmpiricalCovariance`: ['EllipticEnvelope', 'GraphicalLasso', 'GraphicalLassoCV', 'LedoitWolf', 'MinCovDet', 'OAS', 'ShrunkCovariance']
- [ ] `ForestClassifier`: ['ExtraTreesClassifier', 'RandomForestClassifier']
- [ ] `BaseForest`: ['ExtraTreesClassifier', 'ExtraTreesRegressor', 'RandomForestClassifier', 'RandomForestRegressor', 'RandomTreesEmbedding']
- [ ] `ForestRegressor`: ['ExtraTreesRegressor', 'RandomForestRegressor']
- [ ] `BaseThresholdClassifier`: ['FixedThresholdClassifier', 'TunedThresholdClassifierCV']
- [ ] `_GeneralizedLinearRegressor`: ['GammaRegressor', 'PoissonRegressor', 'TweedieRegressor']
- [ ] `BaseRandomProjection`: ['GaussianRandomProjection', 'SparseRandomProjection']
- [ ] `_BaseFilter`: ['GenericUnivariateSelect', 'SelectFdr', 'SelectFpr', 'SelectFwe', 'SelectKBest', 'SelectPercentile']
- [ ] `BaseGradientBoosting`: ['GradientBoostingClassifier', 'GradientBoostingRegressor']
- [ ] `BaseGraphicalLasso`: ['GraphicalLasso', 'GraphicalLassoCV']
- [ ] `BaseSearchCV`: ['GridSearchCV', 'RandomizedSearchCV']
- [ ] `BaseHistGradientBoosting`: ['HistGradientBoostingClassifier', 'HistGradientBoostingRegressor']
- [ ] `_BasePCA`: ['IncrementalPCA', 'PCA']
- [ ] `_BaseImputer`: ['KNNImputer', 'SimpleImputer']
- [ ] `KNeighborsMixin`: ['KNeighborsClassifier', 'KNeighborsRegressor', 'KNeighborsTransformer', 'LocalOutlierFactor', 'NearestNeighbors']
- [ ] `NeighborsBase`: ['KNeighborsClassifier', 'KNeighborsRegressor', 'KNeighborsTransformer', 'LocalOutlierFactor', 'NearestNeighbors', 'RadiusNeighborsClassifier', 'RadiusNeighborsRegressor', 'RadiusNeighborsTransformer']
- [ ] `BaseLabelPropagation`: ['LabelPropagation', 'LabelSpreading']
- [ ] `Lars`: ['LarsCV', 'LassoLars', 'LassoLarsCV', 'LassoLarsIC']
- [ ] `ElasticNet`: ['Lasso', 'MultiTaskElasticNet', 'MultiTaskLasso']
- [ ] `BaseMultilayerPerceptron`: ['MLPClassifier', 'MLPRegressor']
- [ ] `_BaseNMF`: ['MiniBatchNMF', 'NMF']
- [ ] `_BaseSparsePCA`: ['MiniBatchSparsePCA', 'SparsePCA']
- [ ] `_MultiOutputEstimator`: ['MultiOutputClassifier', 'MultiOutputRegressor']
- [ ] `Lasso`: ['MultiTaskElasticNet', 'MultiTaskLasso']
- [ ] `RadiusNeighborsMixin`: ['NearestNeighbors', 'RadiusNeighborsClassifier', 'RadiusNeighborsRegressor', 'RadiusNeighborsTransformer']
- [ ] `BaseSVC`: ['NuSVC', 'SVC']
- [ ] `BaseLibSVM`: ['NuSVC', 'NuSVR', 'OneClassSVM', 'SVC', 'SVR']
- [ ] `_BaseEncoder`: ['OneHotEncoder', 'OrdinalEncoder', 'TargetEncoder']
- [ ] `BaseSGDClassifier`: ['PassiveAggressiveClassifier', 'Perceptron', 'SGDClassifier']
- [ ] `BaseSGD`: ['PassiveAggressiveClassifier', 'PassiveAggressiveRegressor', 'Perceptron', 'SGDClassifier', 'SGDOneClassSVM', 'SGDRegressor']
- [ ] `BaseSGDRegressor`: ['PassiveAggressiveRegressor', 'SGDRegressor']
- [ ] `_BaseRidge`: ['Ridge', 'RidgeClassifier']
- [ ] `_BaseRidgeCV`: ['RidgeCV', 'RidgeClassifierCV']
- [ ] `_RidgeClassifierMixin`: ['RidgeClassifier', 'RidgeClassifierCV']
- [ ] `BaseSpectral`: ['SpectralBiclustering', 'SpectralCoclustering']
- [ ] `BiclusterMixin`: ['SpectralBiclustering', 'SpectralCoclustering']
- [ ] `_BaseStacking`: ['StackingClassifier', 'StackingRegressor']
- [ ] `_BaseHeterogeneousEnsemble`: ['StackingClassifier', 'StackingRegressor', 'VotingClassifier', 'VotingRegressor']
- [ ] `_BaseVoting`: ['VotingClassifier', 'VotingRegressor']
Functions, from the same module, that share parameters.
<details open>
<summary>Details</summary>
I did a lot of manual culling as many functions shared only 1 or 2 parameters and were not actually relevant.
The functions are grouped by the parameters shared, so the list of parameters shared is not exhaustive for any subset of functions within the group. The grouping of functions below is not *necessarily* most ideal for the consistency check.
</details>
**Module: sklearn.utils**
- [ ] Functions: compute_class_weight, compute_sample_weight / Shared parameters: class_weight
- [ ] Functions: resample, shuffle / Shared parameters: random_state
**Module: sklearn.utils.class_weight**
- [ ] Functions: compute_class_weight, compute_sample_weight / Shared parameters: class_weight, y
**Module: sklearn.utils.extmath**
- [ ] Functions: randomized_range_finder, randomized_svd / Shared parameters: n_iter, power_iteration_normalizer, random_state
**Module: sklearn.utils.validation**
- [ ] Functions: as_float_array, check_X_y, check_array / Shared parameters: copy, force_all_finite, ensure_all_finite
- [ ] Functions: check_X_y, check_array / Shared parameters: accept_sparse, accept_large_sparse, order, force_writeable, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features
**Module: sklearn.metrics**
- [ ] Functions: adjusted_mutual_info_score, adjusted_rand_score, completeness_score, fowlkes_mallows_score, homogeneity_completeness_v_measure, homogeneity_score, mutual_info_score, normalized_mutual_info_score, pair_confusion_matrix, rand_score, v_measure_score / Shared parameters: labels_true, labels_pred
**Module: sklearn.metrics.pairwise**
- [ ] Functions: pairwise_distances_argmin, pairwise_distances_argmin_min / Shared parameters: axis, metric_kwargs
- [ ] Functions: pairwise_distances, pairwise_distances_chunked, pairwise_kernels / Shared parameters: n_jobs
- [ ] Functions: check_pairwise_arrays, pairwise_distances / Shared parameters: force_all_finite, ensure_all_finite
- [ ] Functions: chi2_kernel, laplacian_kernel, polynomial_kernel, rbf_kernel, sigmoid_kernel / Shared parameters: gamma
**Module: sklearn.cluster**
- [ ] Functions: affinity_propagation, estimate_bandwidth, k_means, kmeans_plusplus, spectral_clustering / Shared parameters: random_state
- [ ] Functions: cluster_optics_dbscan, cluster_optics_xi / Shared parameters: reachability, ordering
- [ ] Functions: compute_optics_graph, dbscan / Shared parameters: metric, p, metric_params, leaf_size
- [ ] Functions: linkage_tree, ward_tree / Shared parameters: connectivity, return_distance
**Module: sklearn.datasets**
- [ ] Functions: dump_svmlight_file, load_svmlight_file, load_svmlight_files / Shared parameters: zero_based, query_id, multilabel
- [ ] Functions: fetch_20newsgroups, fetch_20newsgroups_vectorized, fetch_california_housing, fetch_covtype, fetch_file, fetch_kddcup99, fetch_lfw_pairs, fetch_lfw_people, fetch_olivetti_faces, fetch_openml, fetch_rcv1, fetch_species_distributions / Shared parameters: n_retries, delay
- [ ] Functions: fetch_20newsgroups_vectorized, fetch_california_housing, fetch_covtype, fetch_kddcup99, fetch_openml, load_breast_cancer, load_diabetes, load_digits, load_iris, load_linnerud, load_wine / Shared parameters: as_frame
- [ ] Functions: make_biclusters, make_checkerboard / Shared parameters: shape, n_clusters, minval, maxval
- [ ] Functions: make_low_rank_matrix, make_regression / Shared parameters: effective_rank, tail_strength
**Module: sklearn.decomposition**
- [ ] Functions: dict_learning, dict_learning_online, fastica, non_negative_factorization / Shared parameters: X, max_iter, n_components, random_state
- [ ] Functions: dict_learning, dict_learning_online, sparse_encode / Shared parameters: alpha, n_jobs
- [ ] Functions: dict_learning, dict_learning_online / Shared parameters: method, dict_init, callback, positive_dict, positive_code, method_max_iter
**Module: sklearn.feature_extraction**
- [ ] Functions: grid_to_graph, img_to_graph / Shared parameters: mask, return_as, dtype, mask, return_as
**Module: sklearn.linear_model**
- [ ] Functions: lars_path, lars_path_gram, orthogonal_mp_gram / Shared parameters: Gram, copy_Gram
- [ ] Functions: lars_path, lars_path_gram / Shared parameters: alpha_min, method
- [ ] Functions: lars_path, lars_path_gram, ridge_regression / Shared parameters: max_iter
- [ ] Functions: lars_path, lars_path_gram, orthogonal_mp, orthogonal_mp_gram / Shared parameters: return_path
- [ ] Functions: orthogonal_mp, orthogonal_mp_gram / Shared parameters: n_nonzero_coefs, tol
**Module: sklearn.neighbors**
- [ ] Functions: kneighbors_graph, radius_neighbors_graph / Shared parameters: X, mode, metric, p, metric_params, include_self, n_jobs
**Module: sklearn.tree**
- [ ] Functions: export_graphviz, export_text, plot_tree / Shared parameters: decision_tree, max_depth, feature_names, class_names
- [ ] Functions: export_graphviz, plot_tree / Shared parameters: label, filled, impurity, node_ids, proportion, rounded, precision
**Module: sklearn.feature_selection**
- [ ] Functions: f_regression, r_regression / Shared parameters: center, force_finite
- [ ] Functions: mutual_info_classif, mutual_info_regression / Shared parameters: discrete_features, n_neighbors, copy, random_state, n_jobs
| 0easy
|
Title: Format and lint files in `mlflow/store/db_migrations`
Body: ### Summary
The DB migration scripts are not auto-generarted, safe to format and lint them.
```diff
diff --git a/pyproject.toml b/pyproject.toml
index 9ebc52ce3..e321bf6ce 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -171,7 +171,6 @@ extend-exclude = [
"mlflow/ml_package_versions.py",
"mlflow/server/graphql/autogenerated_graphql_schema.py",
"mlflow/server/js",
- "mlflow/store/db_migrations",
"tests/protos",
]
```
### Notes
- Make sure to open a PR from a **non-master** branch.
- Sign off the commit using the `-s` flag when making a commit:
```sh
git commit -s -m "..."
# ^^ make sure to use this
```
- Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
| 0easy
|
Title: Order, orientation -> transpose ?
Body: The `orientation` keyword is used to transpose 1d plots and make them vertical.
In the same way, the `order` keyword is used in 2D plots to change the way dims are interpreted.
https://github.com/lukelbd/proplot/blob/591dac72fd33124b793e6d1e37e297005196a633/proplot/wrappers.py#L410
I'm not sure this is crucial since you always have to check the consistency between coordinates and data. In addition, this latter keyword can be confused with the matplotlib keyword. Is this one no longer accessible?
Don't you think a global `transpose` keyword would be more appropriate, for both 1D and 2D plots? For 2D plots, a value like "auto" would even try to guess the appropriate orientation depending on coordinate variations along dimensions and, potentially, attributes (lon, lat, time, etc.).
| 0easy
|
Title: move the files toolbar from the top to the sidebar
Body: When output files are created, the `Output Files` item is available in the top navbar. Please move it to the sidebar. In embedded apps, it will not be available. | 0easy
|
Title: ENH: Passing a single value to `.describe(percentiles = [0.25])` returns 25th- and 50th-percentile
Body: ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import numpy as np
import pandas as pd
# creating a single series dataframe
frame = pd.DataFrame(np.array([1, 2, 3, 4, 5, 100]))
# getting the describe with single percentile value
frame.describe(percentiles = [0.25])
```
### Issue Description
Using a single percentile value below 0.5 for `percentiles` in the DataFrame `describe` function returns the 50th percentile data by default, while the same is not reflected when the value is above 0.5.
```python
# considering the above dataframe in example
>>> frame.describe(percentiles = [0.25])
0
count 6.000000
mean 19.166667
std 39.625329
min 1.000000
25% 2.250000
50% 3.500000
max 100.000000
>>> frame.describe(percentiles = [0.35])
0
count 6.000000
mean 19.166667
std 39.625329
min 1.000000
35% 2.750000
50% 3.500000
max 100.000000
>>> frame.describe(percentiles = [0.51])
0
count 6.000000
mean 19.166667
std 39.625329
min 1.000000
50% 3.500000
51% 3.550000
max 100.000000
```
### Expected Behavior
Should return only given percentile value instead.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.4
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 140 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_India.1252
pandas : 2.2.3
numpy : 2.2.0
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| 0easy
|
Title: [FEA] highlight pattern matches and submatches
Body: **Is your feature request related to a problem? Please describe.**
When looking at a graph, want to highlight some paths, such as by a search
Currently, we can search with `g.chain(...)` or `g._nodes.query(...)`, but it is hard to then superimpose the results back: `g.nodes(g._nodes.merge(g2._nodes, how='left', ...))`
**Describe the solution you'd like**
`g.chain([n(name=...), ...], mode='enrich', name='p1')`
This:
- returns the original graph rather than a subgraph
- labels every node, edge with `p1=True` if its a match
- ... default: `match=True` or first free (`match_1`, `match-2`, ...)
- for any named node/edge predicate, also set that column
Ideally this can be done multiple times and ~OR's vs overrides on named matches:
```g.chain(.., mode='enrich').chain(..., mode='enrich').encode_point_colors('match').plot()```
**Describe alternatives you've considered**
It may also be interesting to do something like `g1.enrich(g2, name='match', node_cols=['name'], edge_cols=[...])` as a rough compositional equivalent
**Additional context**
Came up in a context where multiple such ^^^ were used | 0easy
|
Title: [ENH] Add ability to convert CamelCase to snake_case by default in clean_names
Body: # Brief Description
`clean_names()` right now does not convert names from camel case to snake case. I think it should, and in the spirit of being opinionated, it should be enabled by default.
# Example API
The current function signature is:
```python
def clean_names(df, strip_underscores, case_type: str = 'lower', ...):
```
I would propose changing case_type to default to `snake`, rather than `lower`.
```python
def clean_names(df, strip_underscores, case_type: str = 'snake', ...):
```
# Notes
Some Googling around reveals the following implementations for CamelCase to snake_case conversion:
- [A GitHub gist](https://gist.github.com/jaytaylor/3660565)
- [StackOverflow post](https://gist.github.com/jaytaylor/3660565)
If the implementation is copied over from these two sources, I would ensure that the original code is properly referenced with at least a URL pointing to the original in the docstring. | 0easy
|