Title: include_vars module unable to include new var with 'hash_behaviour: merge'
Body: ### Summary
When using the `include_vars` module with `hash_behaviour: merge`, it can only merge new keys into already existing hashes.
So when I want to define a new hash, or include a new var along with extending an existing one in the same file, I get an error:
"failed to combine variables, expected dicts but got a 'NoneType' and a 'AnsibleMapping'"
My ugly workaround is to define empty hashes at the playbook level (group/host vars or play vars) and to nest non-dict vars into another dict.
Tested on ansible-core 2.15.12, 2.16.12, 2.17.6
### Issue Type
Bug Report
### Component Name
include_vars
### Ansible Version
```console
ansible [core 2.17.6]
config file = /home/krokwen/PycharmProjects/tftc_devops/ansible/ansible.cfg
configured module search path = ['/home/krokwen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/krokwen/.local/lib/python3.12/site-packages/ansible
ansible collection location = /home/krokwen/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.12.7 (main, Oct 1 2024, 00:00:00) [GCC 14.2.1 20240912 (Red Hat 14.2.1-3)] (/usr/bin/python3)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
ANSIBLE_PIPELINING(<edited>/ansible/ansible.cfg) = True
CONFIG_FILE() = <edited>/ansible/ansible.cfg
DEFAULT_HOST_LIST(<edited>/ansible/ansible.cfg) = ['<edited>/ansible/hosts']
DEFAULT_VAULT_PASSWORD_FILE(<edited>/ansible/ansible.cfg) = <edited>/ansible/.vault_pass
EDITOR(env: EDITOR) = /usr/bin/nano
HOST_KEY_CHECKING(<edited>/ansible/ansible.cfg) = False
CONNECTION:
==========
local:
_____
pipelining(<edited>/ansible/ansible.cfg) = True
paramiko_ssh:
____________
host_key_checking(<edited>/ansible/ansible.cfg) = False
psrp:
____
pipelining(<edited>/ansible/ansible.cfg) = True
ssh:
___
host_key_checking(<edited>/ansible/ansible.cfg) = False
pipelining(<edited>/ansible/ansible.cfg) = True
winrm:
_____
pipelining(<edited>/ansible/ansible.cfg) = True
```
### OS / Environment
Fedora 40
### Steps to Reproduce
`merging_vars/file_a.yml`:
```yaml
hash_a:
a: qwe
hash_b:
a: asd
```
`merging_vars/file_b.yml`:
```yaml
hash_a:
b: ewq
hash_b:
b: dsa
hash_c:
b: zxc
var_1: cxz
```
play:
```yaml
- name: Reproduce merging bug
hosts: localhost
tasks:
- name: Include merging vars
ansible.builtin.include_vars:
file: merging_vars/{{ item }}.yml
hash_behaviour: merge
with_items:
- file_a
- file_b
vars:
hash_a: {}
hash_b: {}
```
### Expected Results
Expected to get following vars in play namespace:
```yaml
hash_a:
a: qwe
b: ewq
hash_b:
a: asd
b: dsa
hash_c:
b: zxc
var_1: cxz
```
### Actual Results
```console
TASK [Include grafana org vars] *******************************
task path: <edited>/ansible/monitoring.yml:23
fatal: [localhost]: FAILED! => {
"msg": "failed to combine variables, expected dicts but got a 'NoneType' and a 'AnsibleMapping': \nnull\n{\"b\": \"zxc\"}"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | 0easy
|
Title: trend feature for GBM(s)
Body: **Problem:**
GBMs are, in many areas of ML, the most robust/efficient/easy-to-use ML algorithm, and it's great to have them in Darts.
However, they cannot extrapolate beyond the range of values seen in the training set. Time series often have trends (linear, polynomial, or exponential) that do not interact with the other components of the TS but will push the test-set values outside of the training range, thus destroying the performance.
**Solution:**
This could be solved fairly simply by adding a trend parameter, e.g. `trend=None/poly1/poly2/exp`, which:
1. fits the trend
2. detrends the TS
3. passes the detrended TS to the underlying GBM model
4. re-adds the trend to the output of the GBM model
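The four steps above can be sketched outside Darts with a plain NumPy polynomial fit standing in for the trend model (names here are illustrative assumptions, not Darts API):

```python
import numpy as np

t = np.arange(100, dtype=float)
series = 0.5 * t + np.sin(t / 5.0)        # linear trend + bounded component

coeffs = np.polyfit(t, series, deg=1)      # 1. fit the trend (poly1)
trend = np.polyval(coeffs, t)
detrended = series - trend                 # 2. detrend the TS
# 3. ...fit the underlying GBM on `detrended` (omitted in this sketch)...
t_future = np.arange(100.0, 110.0)
retrended = np.polyval(coeffs, t_future)   # 4. re-add the trend to the
                                           #    model's forecast
```

Because the detrended series is bounded, the GBM only ever sees values inside its training range, while the extrapolation is handled by the fitted trend.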
| 0easy
|
Title: Program runs on Windows 7; demo and test work, but training does not
Body: | 0easy
|
Title: import gensim error. Uses triu function from scipy.linalg which is deprecated
Body: <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/g/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
When importing gensim, there is an internal dependency error with the `triu` function of `scipy.linalg`, as it is deprecated.
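For context (an assumption about the likely fix, not stated in this report): SciPy deprecated and later removed `scipy.linalg.triu`, and `numpy.triu` behaves identically for 2-D arrays:

```python
import numpy as np

# drop-in replacement for the removed scipy.linalg.triu on 2-D arrays
a = np.arange(1, 10).reshape(3, 3)
print(np.triu(a))   # upper triangle, zeros below the diagonal
```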
| 0easy
|
Title: Support _circuit_diagram_info_protocol_ for customized rendering of tags in circuit diagrams
Body: This is a follow-up to #6411 which introduced `str()` as a default representation of tags in a text circuit diagrams.
This should be kept as a default method, but we would like to also support the `_circuit_diagram_info_protocol_`
for tags. `_circuit_diagram_info_protocol_` - if defined - would produce a specialized string for rendering the tag object in circuit diagrams.
**What is the urgency from your perspective for this issue? Is it blocking important work?**
P2 - we should do it in the next couple of quarters
| 0easy
|
Title: BUG: value_counts() returns error/wrong result with PyArrow categorical columns with nulls
Body: ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import pyarrow as pa
# First case: just one column. It gives the error below
pd.DataFrame( { 'A': [ 'a1', pd.NA ] }, dtype = pd.ArrowDtype( pa.dictionary( pa.int32(), pa.utf8() ) ) ).value_counts( dropna = False )
# Second case: more than one column. It gives the wrong result below
pd.concat( [
pd.DataFrame( { 'A': [ 'a1', 'a2' ], 'B': [ 'b1', pd.NA ] }, dtype = pd.ArrowDtype( pa.string() ) ),
pd.DataFrame( { 'C': [ 'c1', 'c2' ], 'D': [ 'd1', pd.NA ] }, dtype = pd.ArrowDtype( pa.dictionary( pa.int32(), pa.utf8() ) ) )
], axis = 1 ).value_counts( dropna = False )
```
### Issue Description
### First Case
It gives the following error:
```python-traceback
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 pd.DataFrame( { 'A': [ 'a1', pd.NA ] }, dtype = pd.ArrowDtype( pa.dictionary( pa.int32(), pa.utf8() ) ) ).value_counts( dropna = False )
File C:\Python\Lib\site-packages\pandas\core\frame.py:7519, in DataFrame.value_counts(self, subset, normalize, sort, ascending, dropna)
7517 # Force MultiIndex for a list_like subset with a single column
7518 if is_list_like(subset) and len(subset) == 1: # type: ignore[arg-type]
-> 7519 counts.index = MultiIndex.from_arrays(
7520 [counts.index], names=[counts.index.name]
7521 )
7523 return counts
File C:\Python\Lib\site-packages\pandas\core\indexes\multi.py:533, in MultiIndex.from_arrays(cls, arrays, sortorder, names)
530 if len(arrays[i]) != len(arrays[i - 1]):
531 raise ValueError("all arrays must be same length")
--> 533 codes, levels = factorize_from_iterables(arrays)
534 if names is lib.no_default:
535 names = [getattr(arr, "name", None) for arr in arrays]
File C:\Python\Lib\site-packages\pandas\core\arrays\categorical.py:3069, in factorize_from_iterables(iterables)
3065 if len(iterables) == 0:
3066 # For consistency, it should return two empty lists.
3067 return [], []
-> 3069 codes, categories = zip(*(factorize_from_iterable(it) for it in iterables))
3070 return list(codes), list(categories)
File C:\Python\Lib\site-packages\pandas\core\arrays\categorical.py:3069, in <genexpr>(.0)
3065 if len(iterables) == 0:
3066 # For consistency, it should return two empty lists.
3067 return [], []
-> 3069 codes, categories = zip(*(factorize_from_iterable(it) for it in iterables))
3070 return list(codes), list(categories)
File C:\Python\Lib\site-packages\pandas\core\arrays\categorical.py:3042, in factorize_from_iterable(values)
3037 codes = values.codes
3038 else:
3039 # The value of ordered is irrelevant since we don't use cat as such,
3040 # but only the resulting categories, the order of which is independent
3041 # from ordered. Set ordered to False as default. See GH #15457
-> 3042 cat = Categorical(values, ordered=False)
3043 categories = cat.categories
3044 codes = cat.codes
File C:\Python\Lib\site-packages\pandas\core\arrays\categorical.py:451, in Categorical.__init__(self, values, categories, ordered, dtype, fastpath, copy)
447 if dtype.categories is None:
448 if isinstance(values.dtype, ArrowDtype) and issubclass(
449 values.dtype.type, CategoricalDtypeType
450 ):
--> 451 arr = values._pa_array.combine_chunks()
452 categories = arr.dictionary.to_pandas(types_mapper=ArrowDtype)
453 codes = arr.indices.to_numpy()
AttributeError: 'Index' object has no attribute '_pa_array'
```
Indeed, the same error is returned also if no `pd.NA` is present.
### Second case
It gives the following result:
```python
A B C D
a1 b1 c1 d1 1
a2 <NA> c2 d1 1
Name: count, dtype: int64
```
**Note that in second line D is d1 and not `<NA>`.**
A more complete example in this JupyterLab notebook: [value_counts() Bug.pdf](https://github.com/user-attachments/files/18133225/value_counts.Bug.pdf)
### Expected Behavior
The expected behavior is analogous to the result obtained with the NumPy backend.
### First case
```python
A
a1 1
<NA> 1
Name: count, dtype: int64
```
### Second case
```python
A B C D
a1 b1 c1 d1 1
a2 <NA> c2 <NA> 1
Name: count, dtype: int64
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.8
python-bits : 64
OS : Windows
OS-release : 2019Server
Version : 10.0.17763
machine : AMD64
processor : Intel64 Family 6 Model 165 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.2.3
numpy : 2.1.2
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 8.29.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : 5.3.0
matplotlib : 3.9.2
numba : None
numexpr : 2.10.1
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 18.1.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| 0easy
|
Title: Type conversion: Remove support for deprecated `ByteString`
Body: [collections.abc.ByteString](https://docs.python.org/3/library/collections.abc.html#collections.abc.ByteString) that Robot's argument conversion supports has been deprecated in Python 3.12 and will be removed in Python 3.14. This basically makes the current Robot Framework code incompatible with Python 3.14 that's released already in 2025, and also means that there are deprecation warnings with Python 3.12 already now.
`ByteString` conversion isn't that useful and I believe the best way to handle these problems is to simply not support it. This is how conversion currently works if you use `arg: ByteString`:
1. If the given argument is an instance of `bytes` or `bytearray` (i.e. already an instance of `ByteString`), it is passed to the keyword as-is.
2. Otherwise, it is converted to `bytes` the same way as if the keyword had used `arg: bytes`.
I don't consider these semantics too useful. I don't see how often you'd want a keyword that accepts either `bytes` or `bytearray`, when you could use `arg: bytes` or `arg: bytearray` to get one of them. More importantly, if you have such a need, using `arg: bytes | bytearray` works exactly the same way and makes the intention more clear.
Removing the support for `ByteString` conversion is obviously a backwards incompatible change, but I doubt there are many users of it; they can easily use `bytes | bytearray` instead, and they need to do that relatively soon anyway, when `ByteString` is gone. I wish we had done this earlier in the RF 7.0 development cycle, but I believe doing it still now, before the release candidate, is fine.
| 0easy
|
Title: Update `cirq.RANDOM_STATE_OR_SEED_LIKE` to support `np.random.Generator`
Body: **Is your feature request related to a use case or problem? Please describe.**
We have several modules that spawn multiple threads for performance. Each of those threads runs a random operation using a random state or seed. When those threads share the same `RandomState`, the multithreading degenerates into sequential processing, since those threads will be waiting on the write operation of the random state.
**Describe the solution you'd like**
Start supporting `np.random.Generator`. This class provides the same API as `np.random.RandomState`, in addition to a `spawn` function which can be used to create independent streams of random values. This will help when starting threads, e.g.
```py3
new_random_generators = prng.spawn(number_threads)
with ThreadPoolExecutor(max_workers=2) as pool:
# submit job i with prng new_random_generators[i]
```
**Describe alternatives/workarounds you've considered**
Before starting the threads, generate multiple seeds using `np.random` or a `RandomState`. While this seems like what we are doing with `np.random.Generator.spawn`, it's actually different in that the random seeds, and hence the random sequences created, will correlate. This means that when running the same operation (e.g. simulation) multiple times in parallel, the results will correlate.
```
state_0 -> state_1 -> state_2 -> ...
\ \ \ \
v v v v
output_0 output_1 output_2 ....
```
**Additional context (e.g. screenshots)**
The cirq random number support is implemented in cirq-core/cirq/value/random_state.py
**What is the urgency from your perspective for this issue? Is it blocking important work?**
P2 - we should do it in the next couple of quarters
| 0easy
|
Title: Delete
Body: | 0easy
|
Title: Is it possible to add custom proxy?
Body: Can I use this with a custom OpenAI API proxy URL? | 0easy
|
Title: Exclude_deterministic argument in Predictive does not apply for models with discrete latents
Body: See the forum issue https://forum.pyro.ai/t/enumerate-support-for-batch-dimensions-of-custom-distribution/7656/4
We need to move the logic of exclude_deterministic to the `infer_discrete` branch https://github.com/pyro-ppl/numpyro/blob/3cde93d0f25490b9b90c1c423816c6cfd9ea23ed/numpyro/infer/util.py#L793-L818 | 0easy
|
Title: Page preview doesn't consider format
Body: Page preview needs to take into account format when previewing | 0easy
|
Title: [Bug]: Poly3DCollection initialization cannot properly handle parameter verts when it is a list of nested tuples and shade is True
Body: ### Bug summary
The initialization of an `mpl_toolkits.mplot3d.Poly3DCollection` object cannot properly handle the parameter `verts` when `verts` is a list of (N, 3) array-like nested tuples and `shade=True`.
### Code for reproduction
```Python
from mpl_toolkits.mplot3d import art3d
corners = ((0, 0, 0), (0, 5, 0), (5, 5, 0), (5, 0, 0))
tri = art3d.Poly3DCollection([corners], shade=True) # Failed when shade=True
# tri = art3d.Poly3DCollection([corners]) # Passed with the default setting shade=False
```
### Actual outcome
```python-traceback
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[3], line 1
----> 1 tri = art3d.Poly3DCollection([corners], shade=True)
File ~/anaconda3/envs/testmpl/lib/python3.12/site-packages/mpl_toolkits/mplot3d/art3d.py:905, in Poly3DCollection.__init__(self, verts, zsort, shade, lightsource, *args, **kwargs)
875 """
876 Parameters
877 ----------
(...)
902 and _edgecolors properties.
903 """
904 if shade:
--> 905 normals = _generate_normals(verts)
906 facecolors = kwargs.get('facecolors', None)
907 if facecolors is not None:
File ~/anaconda3/envs/testmpl/lib/python3.12/site-packages/mpl_toolkits/mplot3d/art3d.py:1222, in _generate_normals(polygons)
1220 n = len(ps)
1221 i1, i2, i3 = 0, n//3, 2*n//3
-> 1222 v1[poly_i, :] = ps[i1, :] - ps[i2, :]
1223 v2[poly_i, :] = ps[i2, :] - ps[i3, :]
1224 return np.cross(v1, v2)
TypeError: tuple indices must be integers or slices, not tuple
```
### Expected outcome
No error.
### Additional information
When `shade=True`, the `__init__` function will first call the function `_generate_normals(polygons)`, where `polygons=verts`. In our case, `verts` is not an instance of `np.ndarray` but a list, so it enters the for loop:
```python
for poly_i, ps in enumerate(polygons):
n = len(ps)
i1, i2, i3 = 0, n//3, 2*n//3
v1[poly_i, :] = ps[i1, :] - ps[i2, :]
v2[poly_i, :] = ps[i2, :] - ps[i3, :]
```
`polygons` is `[((0, 0, 0), (0, 5, 0), (5, 5, 0), (5, 0, 0))]`, and `ps` is a nested tuple, so we need to convert `ps` to `np.ndarray` before using the array slicing like `ps[i1, :]`. A possible fix may be setting `ps = np.asarray(ps)` before array slicing.
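A standalone sketch of that loop with the proposed one-line fix applied (not a patch against matplotlib itself, just the isolated logic):

```python
import numpy as np

polygons = [((0, 0, 0), (0, 5, 0), (5, 5, 0), (5, 0, 0))]
v1 = np.empty((len(polygons), 3))
v2 = np.empty((len(polygons), 3))
for poly_i, ps in enumerate(polygons):
    ps = np.asarray(ps)            # proposed fix: nested tuple -> ndarray
    n = len(ps)
    i1, i2, i3 = 0, n // 3, 2 * n // 3
    v1[poly_i, :] = ps[i1, :] - ps[i2, :]
    v2[poly_i, :] = ps[i2, :] - ps[i3, :]
normals = np.cross(v1, v2)         # no TypeError; one normal per polygon
print(normals)                     # [[  0.   0. -25.]]
```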
### Operating system
_No response_
### Matplotlib Version
3.9.2
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
None | 0easy
|
Title: If --py-template-base-dir is None, report run fails
Body: This means that the default behaviour of falling back to the notebook_template_examples does not work at present. | 0easy
|
Title: README outdated.
Body: ```markdown
Then, copy ca.crt, server.crt, and server.key to openvpn-server/secrets/.
```
Creating server.crt based on the instructions referenced in the README.md does not result in the creation of anything more than the single file `ca.crt`; it seems that `server.crt` and `server.key` are never created.
```bash
➜ easyrsa3.ca git:(master) ✗ ls -lAh ./*/*
-rw------- 1 root root 1.2K Nov 19 00:33 ./pki/ca.crt
-rw------- 1 root root 0 Nov 19 00:33 ./pki/index.txt
-rw------- 1 root root 0 Nov 19 00:33 ./pki/index.txt.attr
-rw------- 1 root root 4.6K Nov 19 00:33 ./pki/openssl-easyrsa.cnf
-rw------- 1 root root 4.6K Nov 19 00:33 ./pki/safessl-easyrsa.cnf
-rw------- 1 root root 3 Nov 19 00:33 ./pki/serial
-rw-r--r-- 1 root root 300 Nov 19 00:33 ./x509-types/COMMON
-rw-r--r-- 1 root root 426 Nov 19 00:33 ./x509-types/ca
-rw-r--r-- 1 root root 192 Nov 19 00:33 ./x509-types/client
-rw-r--r-- 1 root root 193 Nov 19 00:33 ./x509-types/code-signing
-rw-r--r-- 1 root root 225 Nov 19 00:33 ./x509-types/email
-rw-r--r-- 1 root root 661 Nov 19 00:33 ./x509-types/kdc
-rw-r--r-- 1 root root 208 Nov 19 00:33 ./x509-types/server
-rw-r--r-- 1 root root 226 Nov 19 00:33 ./x509-types/serverClient
./pki/certs_by_serial:
total 0
./pki/issued:
total 0
./pki/private:
total 4.0K
-rw------- 1 root root 1.8K Nov 19 00:33 ca.key
./pki/renewed:
total 12K
drwx------ 2 root root 4.0K Nov 19 00:33 certs_by_serial
drwx------ 2 root root 4.0K Nov 19 00:33 private_by_serial
drwx------ 2 root root 4.0K Nov 19 00:33 reqs_by_serial
./pki/reqs:
total 0
./pki/revoked:
total 12K
drwx------ 2 root root 4.0K Nov 19 00:33 certs_by_serial
drwx------ 2 root root 4.0K Nov 19 00:33 private_by_serial
drwx------ 2 root root 4.0K Nov 19 00:33 reqs_by_serial
```
The instructions are likely sufficient, but they seem unclear, so I'd like to remedy this in the documentation by submitting an issue. | 0easy
|
Title: Improve error message when task function has wrong signature
Body: When a user declares a function with the argument `products` (instead of `product`), the error looks like this:
```
Error rendering task "get" initialized with function "get". Got unexpected arguments: ['product']. Missing arguments: ['products']. Pass ['products'] in "params"
```
This is confusing because the user may end up passing `products` under the params section. But under this scenario, the user really meant to pass `product`.
Solution: check the extra arguments and see if any of them is similar to `product`, to detect typos; then modify the error message so it says something like:
```
You are passing products. Did you mean product?
```
We can use difflib to get close matches
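A minimal sketch of that check with `difflib` (variable names are illustrative):

```python
import difflib

declared = ['products']   # what the user's task function declares
expected = ['product']    # what the task actually receives

for name in declared:
    close = difflib.get_close_matches(name, expected, n=1, cutoff=0.8)
    if close:
        print(f"You are passing {name}. Did you mean {close[0]}?")
```

With these inputs it prints `You are passing products. Did you mean product?`, which could be appended to the existing error message.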
| 0easy
|
Title: tox hangs but no subprocess running anymore
Body: ## Issue
Sometimes (not reproducible every time, but I run into the issue quite often) tox does not terminate.
I run it with
```console
python3 -m tox run-parallel --conf "${config_dir}"/tox.ini --parallel auto --parallel-no-spinner --quiet
```
This command will then never terminate, but according to htop it seems none of the external tools is running anymore:

When I add `--exit-and-dump-after 10` it does terminate and produces the output below.
## Environment
Provide at least:
- OS: Ubuntu 20.04
- Python: 3.8.10
<details open>
<summary>Output when terminated using exit-and-dump-after...</summary>
```console
Timeout (0:00:10)!
Thread 0x00007f87e2ffd700 (most recent call first):
File "/myhome.../.venv/lib/python3.8/site-packages/tox/execute/local_sub_process/read_via_thread_unix.py", line 34 in _read_available
File "/myhome.../.venv/lib/python3.8/site-packages/tox/execute/local_sub_process/read_via_thread_unix.py", line 23 in _read_stream
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f87f9ffb700 (most recent call first):
File "/myhome.../.venv/lib/python3.8/site-packages/tox/execute/local_sub_process/read_via_thread_unix.py", line 34 in _read_available
File "/myhome.../.venv/lib/python3.8/site-packages/tox/execute/local_sub_process/read_via_thread_unix.py", line 23 in _read_stream
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f887dffb700 (most recent call first):
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 78 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f887e7fc700 (most recent call first):
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 78 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f887effd700 (most recent call first):
File "/usr/lib/python3.8/subprocess.py", line 1764 in _try_wait
File "/usr/lib/python3.8/subprocess.py", line 1806 in _wait
File "/usr/lib/python3.8/subprocess.py", line 1083 in wait
File "/myhome.../.venv/lib/python3.8/site-packages/tox/execute/local_sub_process/__init__.py", line 100 in wait
File "/myhome.../.venv/lib/python3.8/site-packages/tox/tox_env/api.py", line 388 in execute
File "/myhome.../.venv/lib/python3.8/site-packages/tox/session/cmd/run/single.py", line 106 in run_command_set
File "/myhome.../.venv/lib/python3.8/site-packages/tox/session/cmd/run/single.py", line 85 in run_commands
File "/myhome.../.venv/lib/python3.8/site-packages/tox/session/cmd/run/single.py", line 48 in _evaluate
File "/myhome.../.venv/lib/python3.8/site-packages/tox/session/cmd/run/single.py", line 36 in run_one
File "/myhome.../.venv/lib/python3.8/site-packages/tox/session/cmd/run/common.py", line 322 in _run
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57 in run
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 80 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f887f7fe700 (most recent call first):
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 78 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f887ffff700 (most recent call first):
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 78 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f889cf27700 (most recent call first):
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 78 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f889d728700 (most recent call first):
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 78 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f889df69700 (most recent call first):
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 78 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f889e76a700 (most recent call first):
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 78 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f889ef6b700 (most recent call first):
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 78 in _worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f889f76c700 (most recent call first):
File "/usr/lib/python3.8/threading.py", line 306 in wait
File "/usr/lib/python3.8/threading.py", line 558 in wait
File "/myhome.../.venv/lib/python3.8/site-packages/tox/util/spinner.py", line 87 in render
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f889ff6d700 (most recent call first):
File "/usr/lib/python3.8/threading.py", line 302 in wait
File "/usr/lib/python3.8/threading.py", line 558 in wait
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 244 in as_completed
File "/myhome.../.venv/lib/python3.8/site-packages/tox/session/cmd/run/common.py", line 346 in _queue_and_wait
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f88a1e38740 (most recent call first):
File "/usr/lib/python3.8/threading.py", line 1027 in _wait_for_tstate_lock
File "/usr/lib/python3.8/threading.py", line 1011 in join
File "/myhome.../.venv/lib/python3.8/site-packages/tox/session/cmd/run/common.py", line 251 in execute
File "/myhome.../.venv/lib/python3.8/site-packages/tox/session/cmd/run/parallel.py", line 85 in run_parallel
File "/myhome.../.venv/lib/python3.8/site-packages/tox/run.py", line 46 in main
File "/myhome.../.venv/lib/python3.8/site-packages/tox/run.py", line 20 in run
File "/myhome.../.venv/lib/python3.8/site-packages/tox/__main__.py", line 6 in <module>
File "/usr/lib/python3.8/runpy.py", line 87 in _run_code
File "/usr/lib/python3.8/runpy.py", line 194 in _run_module_as_main
```
</details>
## Output of running tox
<details open>
<summary>Tox output is simply incomplete:</summary>

(special characters at the end accidentally added)
</details>
## Minimal example
Not sure how to reproduce currently.
| 0easy
|
Title: parallel batching for `find()`
Body: In `docarray/utils/find.py` we have a helper function `find_batched()` that performs exact kNN search on multiple input queries.
The current implementation treats all queries as one big batch; It calculates all distances for all queries in one go.
In some scenarios this is not ideal: if the CPU is used to perform the vector similarity computation (instead of the GPU), and the number of input queries is large, it would be nice to be able to split the workload across multiple threads and/or processes.
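One possible shape for this, as a plain-NumPy sketch using threads (`knn` and `knn_parallel` are hypothetical helpers, not docarray API):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def knn(index, queries, k=1):
    # exact kNN for one chunk of queries (Euclidean distance)
    dists = np.linalg.norm(index[None, :, :] - queries[:, None, :], axis=-1)
    return np.argsort(dists, axis=1)[:, :k]

def knn_parallel(index, queries, k=1, n_workers=4):
    # split the one big query batch into chunks, one per worker thread
    chunks = np.array_split(queries, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = list(pool.map(lambda q: knn(index, q, k), chunks))
    return np.vstack(parts)       # reassemble in the original query order

index = np.random.default_rng(0).normal(size=(100, 8))
queries = index[:10] + 0.001  # each query's nearest neighbor is itself
```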
Possibly an implementation of this could leverage our `map_doc()` or `map_docs_batch()` functions to achieve this. | 0easy
|
Title: Add web browser interaction to the WS tutorial
Body: Add a simple web page that is served from the tutorial's app, and make it connect and send/recv things over WS in a more visual way. Because that's what the majority of users use WS for 🙂 | 0easy
|
Title: Marketplace - agent page - weird background color on certain elements, please get rid of bg color
Body: ### Describe your issue.
<img width="1513" alt="Screenshot 2024-12-17 at 19 21 49" src="https://github.com/user-attachments/assets/bad71cfa-1210-49b8-b95e-a8d86a746de1" />
<img width="1534" alt="Screenshot 2024-12-17 at 19 24 41" src="https://github.com/user-attachments/assets/9641bd75-d0f6-4893-8841-de113a2e22cd" />
In dark mode, these areas have a weird background color. Can we get rid of the bg color?
| 0easy
|
Title: Fail CI fast on example failure
Body: Instead of running all notebook examples and then looking for errors as is currently done, we should fail fast. Fail the CI build as soon as one of the examples produces an error.
Some discussion here: https://github.com/scikit-optimize/scikit-optimize/pull/208#issuecomment-242785338 also on how the current failure detection is done.
| 0easy
|
Title: Add more and better tests and test cases
Body: # Sample code
Please check out the sample test cases in:
- https://github.com/mithi/hexapod-robot-simulator/tree/master/tests
# References
Get started with testing python
- https://realpython.com/python-testing/
Write Professional Unit Tests in Python
- https://code.tutsplus.com/tutorials/write-professional-unit-tests-in-python--cms-25835
Web Automation Tests with Selenium
- https://www.browserstack.com/guide/python-selenium-to-run-web-automation-test
| 0easy
|
Title: Add brotlicffi support
Body: Currently, brotli compression is supported when using `brotli` or `brotlipy` (deprecated). We should also support it thorugh `brotlicffi`, the new name of `brotlipy`, which performs worse than `brotli` but works on PyPy. | 0easy
|
Title: ansible-galaxy should dedupe collections from python install
Body: ### Summary
When I run `ansible-galaxy collection list` it lists the same collection as coming from:
* /home/florian/.local/share/pipx/venvs/ansible/lib/python3.13/site-packages/ansible_collections
* /home/florian/.local/share/pipx/venvs/ansible/lib64/python3.13/site-packages/ansible_collections
The only difference in this path is `lib` vs `lib64` of my venv. An ls shows that `lib64` is simply a symlink to `lib`:
```
$ ls -l /home/florian/.local/share/pipx/venvs/ansible/
total 28
drwxr-xr-x. 1 florian florian 650 13. Jan 13:01 bin
drwxr-xr-x. 1 florian florian 20 22. Okt 08:42 include
drwxr-xr-x. 1 florian florian 20 22. Okt 08:42 lib
lrwxrwxrwx. 1 florian florian 3 22. Okt 08:42 lib64 -> lib
-rw-r--r--. 1 florian florian 19885 16. Jan 14:39 pipx_metadata.json
-rw-r--r--. 1 florian florian 201 22. Okt 08:42 pyvenv.cfg
```
It would be great if ansible-galaxy could dedupe those.
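A sketch of how such deduplication could work, resolving symlinks before comparing paths (illustrated with a temporary `lib64 -> lib` symlink; this is not ansible-galaxy code):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    lib = os.path.join(tmp, "lib")
    lib64 = os.path.join(tmp, "lib64")
    os.mkdir(lib)
    os.symlink(lib, lib64)               # mirror the venv layout

    seen, deduped = set(), []
    for path in (lib, lib64):
        real = os.path.realpath(path)    # lib64 resolves to lib
        if real not in seen:
            seen.add(real)
            deduped.append(path)
    # only one of the two paths survives
```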
### Issue Type
Feature Idea
### Component Name
ansible-galaxy
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | 0easy
|
Title: SDK task.upload_data() cannot be used with resources of the Path type
Body: ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
The SDK has a `Task.upload_data()` method, which claims to support `StrPath`-typed `resource`s, but it actually only works for the `LOCAL` `resource_type`.
https://github.com/cvat-ai/cvat/blob/fec040db35eb5f69694cded87e2c0ba8ebb20199/cvat-sdk/cvat_sdk/core/proxies/tasks.py#L73
### Expected Behavior
Task should be created.
### Possible Solution
Remove the invalid error here: https://github.com/cvat-ai/cvat/blob/fec040db35eb5f69694cded87e2c0ba8ebb20199/cvat-sdk/cvat_sdk/core/proxies/tasks.py#L112-L114, and replace it with a type conversion.
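A minimal sketch of that conversion (the helper name is hypothetical, not CVAT's actual code):

```python
from pathlib import Path
from typing import List, Sequence, Union

def normalize_resources(resources: Sequence[Union[str, Path]]) -> List[str]:
    # Instead of raising for non-str resources, coerce PathLike values
    # to str so StrPath-typed arguments work for any resource_type.
    return [str(r) for r in resources]
```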
### Context
_No response_
### Environment
```Markdown
``` | 0easy
|
Title: Bug in `Return From Keyword If` documentation
Body: In the `return_from_keyword_if` function's documentation, a wrong description is given.
Proposed wording is in **bold**:
*NOTE:* Robot Framework 5.0 added support for native ``RETURN`` statement
and for inline ``IF``, and that combination should be used instead of this
keyword. For example, ``Return From Keyword **If**`` usage in the example below
could be replaced with
Given the same example as in `Return From Keyword **If**`, we can rewrite the
`Find Index` keyword as follows: | 0easy
|
Title: Args 'title', 'version' and 'openapi' ignored.
Body: Commit 'd24b921cdc4e19f062f8ae3dad01a5eaa69180dd' caused the args `title`, `version` and `openapi` of the `API` initializer to be ignored. The OpenAPISchema is now initialized with hard-coded values, which were probably meant as default fallbacks.
```
title="Web Service",
version="1.0",
openapi="3.0.2",
``` | 0easy
|
Title: TypeError for path
Body: **Describe the bug**
I am using DemoGPT on MAC, and ran into TypeError once I submitted.
**To Reproduce**
Steps to reproduce the behavior:
1. On terminal, run app with command `streamlit run app.py`
2. Click on the example Language Translator
3. Click submit button
4. See error
**Error Message**
`
Language Translator 📝
2023-07-03 17:26:34.783 Uncaught app exception
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/Users/mchuang/Desktop/Testing tools/DemoGPT/src/prompt_based/app.py", line 67, in <module>
for data in generate_response(demo_idea):
File "/Users/mchuang/Desktop/Testing tools/DemoGPT/src/prompt_based/app.py", line 16, in generate_response
for data in agent(txt,num_of_iterations):
File "/Users/mchuang/Desktop/Testing tools/DemoGPT/src/prompt_based/model.py", line 69, in __call__
response, error = self.run_python(total_code)
File "/Users/mchuang/Desktop/Testing tools/DemoGPT/src/prompt_based/model.py", line 35, in run_python
process = subprocess.Popen([python_path,tmp.name], env=environmental_variables,stdout=PIPE, stderr=PIPE)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 966, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1717, in _execute_child
and os.path.dirname(executable)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/posixpath.py", line 152, in dirname
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
`
**Screenshots**
<img width="751" alt="image" src="https://github.com/melih-unsal/DemoGPT/assets/25274391/1fd0cbd5-a9ba-4e0f-a576-eb9806724799">
**Desktop (please complete the following information):**
- OS: macOS Monterey ver.12.3.1
- Browser: Chrome
- Version: 114.0.5735.106
| 0easy
|
Title: Text customize
Body: There is some text on the page, but it is hard-coded in your index.js.
A better approach would be to make it customizable by parsing markdown or reading it from a config file.
That said, I think Gatsby is not as convenient as Next.js in terms of reading data... so would you consider changing the page framework? 😳 | 0easy
|
Title: Problems encountered & developer update notes thread
Body: # About this thread
This thread is used to post **development notes**, fixes for **failures people run into**, **detailed problem listings**, and **questions and discussion**.
**Before using the tool, please make sure you are running the latest code pushed to this repository.**
## On filing issues and avoiding invalid questions
Please do not casually submit issues that have **no description** (or no **detailed** one), or whose problem can be **found online** or has **already been solved**; you will not get any solution for them.
https://github.com/Johnserf-Seed/Stop-Ask-Questions-The-Stupid-Ways
## About the terminal
I use Windows Terminal <a href="https://aka.ms/terminal" data-linktype="external" class="button button-primary button-filled has-text-wrap">Install Windows Terminal</a>

## About `No module named 'xxx'`
In an empty area of the project folder, press ```Shift + Right Click``` and choose ```Open in Terminal``` or ```Open xxx shell here```
Run ```pip install -r requirements.txt``` to install the project dependencies
If you hit other dependency errors, reply under this thread
## About `UnboundLocalError: local variable 'response' referenced before assignment`
The local ```Server``` service was not started correctly. Make sure ```Node.js``` is installed properly, open a terminal in the ```Util``` directory, run ```npm i``` to install the node dependencies, and then start ```Server.bat```
## About `[WinError 10061] No connection could be made because the target machine actively refused it`
Same fix as for ```UnboundLocalError: local variable 'response' referenced before assignment``` above
## About updates
Douyin has been updating very frequently lately, so this project is updated frequently as well.
**Q: Why does the updated version still fail to run?**
A: After an update, the exe must be repackaged in sync to pick up the changes; otherwise it differs from my updated code.
To make sure everyone runs the same code as my development environment, the [Release](https://github.com/Johnserf-Seed/TikTokDownload/releases/) page provides the md5 of the latest packaged exe.
After each update to ```TikTokTool```, repackage it with ```build.bat```, or simply run the py files directly; that is the latest runnable version.
## About networking

**Q: Why is my connection still flaky even though I use a proxy?**
A: GitHub itself may be acting up. Check whether your local DNS is polluted; after adding the proxy IP to the hosts file, flush the system DNS cache with ```ipconfig /flushdns``` and try again.
For those without a proxy, I recommend the tool [steamcommunity_302](https://www.dogfight360.com/blog/686/) by [羽翼城](https://www.dogfight360.com/)

## About the configuration file
Just fill in the values of ```odin_tt```, ```sessionid_ss```, ```ttwid```, ```passport_csrf_token``` and ```msToken```, as shown below
<img src="https://tvax2.sinaimg.cn/large/006908GAly1hcmjw94bqcj30zv05l79v.jpg" alt="image" width="800">
## Changelog
+ Switched the update package URL back to the official one: https://github.com/Johnserf-Seed/TikTokDownload/archive/master.zip
+ Added several major features to the configuration file (updates to follow)
+ <img src="https://camo.githubusercontent.com/d7e666d8fd88abfdec54588ac185b7f21d2efb52bc4fc846a6e571eecc139cc3/68747470733a2f2f74766178312e73696e61696d672e636e2f6c617267652f30303639303847416c79316862757772396d3067306a33306a393069696b31382e6a7067" alt="image" width="700" data-width="500">
## Mirror acceleration sites (replacement update links)
- https://gh.api.99988866.xyz/ (previously used, temporarily down)
- https://g.ioiox.com/ (down)
- https://github.welab.eu.org/ (untested)
- https://tool.mintimate.cn/gh/ (untested)
- https://pd.zwc365.com/ (untested)
<a href="https://info.flagcounter.com/qT4H"><img src="https://s11.flagcounter.com/count2/qT4H/bg_FFFFFF/txt_000000/border_CCCCCC/columns_5/maxflags_20/viewers_0/labels_1/pageviews_1/flags_0/percent_0/" alt="Flag Counter" border="0"></a>
Updated 2023/02/09 18:58 | 0easy
|
Title: Need to be able to zoom out more in the builder
Body: | 0easy
|
Title: metric API
Body: The canonical definition is here: https://chaoss.community/?p=3464 | 0easy
|
Title: uncontrollable chromium process caused by 'http.client.BadStatusLine: GET /json/version HTTP/1.1' exception
Body: ```python
browser = await launch(args)
```
The code above may cause an 'http.client.BadStatusLine: GET /json/version HTTP/1.1' exception; this exception is raised by urllib's urlopen().
In pyppeteer, urlopen is used in `launcher.py` (line 222), in code like this:
```python
def get_ws_endpoint(url) -> str:
url = url + '/json/version'
timeout = time.time() + 30
while (True):
if time.time() > timeout:
raise BrowserError('Browser closed unexpectedly:\n')
try:
with urlopen(url) as f:
data = json.loads(f.read().decode())
break
except URLError as e:
continue
time.sleep(0.1)
return data['webSocketDebuggerUrl']
```
The exception belongs to `http.client.HTTPException`, not `URLError`, so the `try ... except` cannot catch it.
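A sketch of a fixed version (same polling behavior as the original; the injectable `opener` parameter is only there so the sketch is testable, it is not part of pyppeteer's code):

```python
import json
import time
from http.client import HTTPException
from urllib.error import URLError
from urllib.request import urlopen

def get_ws_endpoint(url, opener=urlopen, timeout_s=30):
    """Poll /json/version until the DevTools endpoint answers.

    Unlike the original, this also catches http.client.HTTPException
    (BadStatusLine is a subclass of it), so a half-open HTTP server no
    longer leaks an uncontrollable chromium process.
    """
    url = url + '/json/version'
    deadline = time.time() + timeout_s
    while True:
        if time.time() > deadline:
            raise TimeoutError('Browser closed unexpectedly')
        try:
            with opener(url) as f:
                data = json.loads(f.read().decode())
            break
        except (URLError, HTTPException):
            # retry both connection errors and malformed HTTP responses
            time.sleep(0.01)
    return data['webSocketDebuggerUrl']
```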
As a result, the `launch` method cannot return the browser successfully.
```python
connectionDelay = self.slowMo
self.browserWSEndpoint = get_ws_endpoint(self.url)  # when this method raises an exception, the following code is unreachable
logger.info(f'Browser listening on: {self.browserWSEndpoint}')
self.connection = Connection(self.browserWSEndpoint, self._loop, connectionDelay, )
browser = await Browser.create(self.connection, [], self.ignoreHTTPSErrors, self.defaultViewport, self.proc,
self.killChrome)
await self.ensureInitialPage(browser)
return browser
```
However, the chromium process has already been launched before this point, so I simply cannot control that process when the 'http.client.BadStatusLine: GET /json/version HTTP/1.1' exception occurs.
```python
self.proc = subprocess.Popen( # type: ignore
self.cmd, **options, ) # launcher.py line 146
```
In my opinion, the browser process should only be launched right before the `Connection` is set up; this would also follow the 'proximity rule'. | 0easy
|
Title: Better data type detection for pre_aggregated, indexed dataframes
Body: When a dataframe is pre-aggregated, our cardinality-based type detection often fails to detect the type correctly. For example, when the dataset size is small (often the case when data is pre-aggregated), nominal fields get recognized as a `quantitative` type.
```python
import lux
import pandas as pd

df = pd.read_csv("lux/data/car.csv")
df["Year"] = pd.to_datetime(df["Year"], format='%Y') # change pandas dtype for the column "Year" to datetype
a = df.groupby("Cylinders").mean()
a.data_type
```
<img src="https://user-images.githubusercontent.com/5554675/90112958-3f428600-dd83-11ea-9645-187788fb29e3.png" width=250></img>
As a related issue, we should also support the detection of types for named index, for example, in this case, `Cylinders` is an index, so its data type is not being computed. | 0easy
|
Title: Filter columns when doing predictions
Body: When doing predictions, please use only the columns that were used for model building. | 0easy
|
Title: Error on hover event of parallel coordinates graph component
Body: I am trying to associate a callback with the hover event in a parallel coordinates plot, but it does not seem to work properly. More specifically, it raises the following JS error in my Dev Console:

Any clue? | 0easy
|
Title: [New feature] Add apply_to_images to Defocus
Body: | 0easy
|
Title: Marketplace - agent page - Change font of agent name
Body: ### Describe your issue.
<img width="743" alt="Screenshot 2024-12-17 at 19 12 16" src="https://github.com/user-attachments/assets/3c3f762c-6502-482a-bd39-c6c363b53d50" />
Change this to the "h2" style in the typography styleguide: https://www.figma.com/design/Ll8EOTAVIlNlbfOCqa1fG9/Agent-Store-V2?node-id=2759-9596&t=2JI1c3X9fIXeTTbE-1
**Font style:**
```css
font-family: Poppins;
font-size: 35px;
font-weight: 500;
line-height: 40px;
letter-spacing: -0.0075em;
text-align: left;
text-underline-position: from-font;
text-decoration-skip-ink: none;
```
**Font colors:**
```css
background: var(--neutral-900, #171717);
```
| 0easy
|
Title: Source kept as "auto" when changing just category name
Body: ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. create a bounding box with automatic annotation so that it is marked as `AUTO`
2. change the size of the box --> it will be marked as `SEMI-AUTO`
3. create another bounding box with automatic annotation so that it is marked as `AUTO`
4. change the label category only without modifying the bounding box dimensions --> the annotation is still marked as `AUTO`
In addition, as a feature enhancement, I'd like the possibility to set a custom source when using nuclio functions. As of now, detectors lead to `AUTO` annotations while interactors lead to `SEMI-AUTO` annotations, which makes sense in general. However, I would like to be able to set a custom source as well, falling back to the default `AUTO`/`SEMI-AUTO` when no source is specified.
### Expected Behavior
I would expect a "change of category" to imply a change of source as well from `AUTO` to `SEMI-AUTO`.
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
``` | 0easy
|
Title: feat: parser: walrus assignment operator without parentheses
Body: I found using the walrus operator pretty useful with [decorator aliases](https://github.com/anki-code/xontrib-dalias) because it prints the result object immediately and the result stays in a variable:
```xsh
(r := $(@json echo '["hello", "world"]'))
# ["hello", "world"]
r[0]
# 'hello'
```
It would be cool to get rid of the brackets, i.e. treat the following example as Python mode instead of subprocess mode:
```xsh
r := 1
# xonsh: subprocess mode: command not found: 'r'
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: use `str` instead of `repr` for tags in circuit diagram
Body: **Is your feature request related to a use case or problem? Please describe.**
currently tags are displayed in circuit diagrams via their `repr` ([implicitly](https://github.com/quantumlib/Cirq/blob/4d35768c2a21204d430db5f2b389663e03a9351e/cirq-core/cirq/protocols/circuit_diagram_info_protocol.py#L356-L357) via `str(list(tags))`). This can lead to some unruly or hard-to-read circuit diagrams, e.g.:
```
1: ───X^0.5───Rz(0.928π)[cirq.VirtualTag()]───X^0.5───Rz(π)[cirq.VirtualTag()]───────@──────────────────────────────────
│
2: ───X^0.5───Rz(0.072π)[cirq.VirtualTag()]───X^0.5───Rz(-0.5π)[cirq.VirtualTag()]───X───Rz(-1.5π)[cirq.VirtualTag()]───
```
this could be cleaned up by overriding `tag.__repr__`, but usually not without breaking the expectation that `eval(repr(tagged_op)) == tagged_op`
**Describe the solution you'd like**
it seems like it'd be much more natural (and easier to customize) if the circuit diagram drawer to instead use the tags' strings, e.g.:
```
1: ───X^0.5───Rz(0.928π)[<virtual>]───X^0.5───Rz(π)[<virtual>]───────@──────────────────────────
│
2: ───X^0.5───Rz(0.072π)[<virtual>]───X^0.5───Rz(-0.5π)[<virtual>]───X───Rz(-1.5π)[<virtual>]───
```
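A minimal illustration of the difference (using a stand-in tag class, not `cirq.VirtualTag` itself):

```python
class VirtualTag:
    """Stand-in tag with a round-trippable repr and a compact str."""
    def __repr__(self) -> str:
        return 'cirq.VirtualTag()'
    def __str__(self) -> str:
        return '<virtual>'

tags = [VirtualTag()]

# current behavior: the diagram implicitly renders str(list(tags)),
# and list's str falls back to each element's repr
current = str(list(tags))

# proposed behavior: join the tags' str() forms instead
proposed = '[' + ', '.join(str(t) for t in tags) + ']'
```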
**What is the urgency from your perspective for this issue? Is it blocking important work?**
P3 - I'm not really blocked by it, it is an idea I'd like to discuss / suggestion based on principle
(happy to work on this if it seems reasonable) | 0easy
|
Title: replace nouislider with ant slider in quickstart
Body: The ant design slider component is better supported so it should be replaced. | 0easy
|
Title: [New feature] Add `apply_to_images` to `AdvancedBlur`
Body: | 0easy
|
Title: Doc: Explore Sphinx options if there is a flag to emit a warning on broken links
Body: Setting `nitpicky = True` in `conf.py` or passing the `-n` command line argument will warn on broken links. | 0easy
|
Title: investigate recycling pipes
Body: Investigate recycling of Pipe connections.
Let’s merge it as a P0, but please add an issue to investigate recycling pipes
_Originally posted by @lantiga in https://github.com/Lightning-AI/LitServe/pull/108#pullrequestreview-2073769140_
| 0easy
|
Title: Handle dash.exceptions.PreventUpdate
Body: As reported in #205, when a callback raises PreventUpdate the wrapper code does not handle it.
A change in [views.py](https://github.com/GibbsConsulting/django-plotly-dash/blob/master/django_plotly_dash/views.py) should allow these exceptions to be handled. The `update` function needs to catch this exception and return an empty response with a status code of 204; this is how the underlying [dash](https://github.com/plotly/dash/blob/2d735aa250fc67b14dc8f6a337d15a16b7cbd6f8/dash/dash.py#L381) code handles this response.
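A framework-free sketch of the proposed handling (the classes below are stand-ins for `dash.exceptions.PreventUpdate` and Django's `HttpResponse`, not the real ones):

```python
class PreventUpdate(Exception):
    """Stand-in for dash.exceptions.PreventUpdate."""

class Response:
    """Stand-in for django.http.HttpResponse."""
    def __init__(self, content: str = "", status: int = 200):
        self.content = content
        self.status = status

def dispatch_update(run_callback):
    """Mirror dash's own behavior: map PreventUpdate to an empty 204."""
    try:
        return Response(run_callback(), status=200)
    except PreventUpdate:
        return Response(status=204)
```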
| 0easy
|
Title: Tradeogre with pandas-ta
Body: Hi
How can I apply it to TradeOgre? Here I have code for a simple bot for this exchange, but I would like to add indicators like RSI, MACD, EMA, etc. to it. Is that possible at all with this exchange? Here is my repository with the code:
https://github.com/RAPER200X/TO-TRADER
| 0easy
|
Title: [ENH] Proposal: Enforce automagic column name conversion to string type in `clean_names`
Body: UPDATE FROM MAINTAINERS: ANYBODY WHO IS INTERESTED IN THIS ISSUE, PLEASE SEE [THIS COMMENT](https://github.com/ericmjl/pyjanitor/issues/612#issuecomment-557250963) FOR PROPOSED CODE.
------
# Brief Description
The `clean_names` method does not work when an integer is used as a column name
# Minimally Reproducible Code
```python
import pandas as pd
import janitor  # registers the clean_names method on pandas DataFrames

rankings = {
"countries_to_play_cricket": [
"India",
"South Africa",
"England",
"New Zealand",
"Australia",
],
0: ["England", "India", "New Zealand", "South Africa", "Pakistan"],
"t20": ["Pakistan", "India", "Australia", "England", "New Zealand"],
}
rankings_pd = pd.DataFrame(rankings)
df = rankings_pd.clean_names()
```
# Error Messages
```
AttributeError: 'int' object has no attribute 'lower'
```
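As a workaround until the automagic conversion lands, coercing the labels to strings up front avoids the error (a sketch; the subsequent `clean_names()` call is assumed to succeed once all labels are `str`):

```python
import pandas as pd

rankings_pd = pd.DataFrame({"T20": ["Pakistan"], 0: ["England"]})

# clean_names() calls .lower() on each label, so an int label breaks it;
# mapping every label through str first sidesteps the AttributeError
rankings_pd.columns = rankings_pd.columns.map(str)
```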
If I change the column name integer 0 to string "0" then there will be no error. But in that case it defeats the purpose of using a method :) | 0easy
|
Title: [BUG] Logical operators are not type compliant when using bool operations
Body: **Describe the bug**
Logical operators are not type compliant when using bool operations (e.g. `Or(Product.price < 10, Product.category == "Sweets")`)
**To Reproduce**
Just use the same example from docs:
```python
class Product(Document):
price: float
category: str
Or(Product.price<10, Product.category=="Sweets")
```
It seems that logical ops are missing bool in the accepted argument types: https://github.com/BeanieODM/beanie/blob/main/beanie/odm/operators/find/logical.py#L16
**Expected behavior**
Expected to work normally with type checking. Instead, got the error:
```
Argument 1 to "Or" has incompatible type "bool"; expected "BaseFindOperator | dict[str, Any] | Mapping[str, Any]"Mypy[arg-type](https://mypy.readthedocs.io/en/latest/_refs.html#code-arg-type)
Argument 2 to "Or" has incompatible type "bool"; expected "BaseFindOperator | dict[str, Any] | Mapping[str, Any]"Mypy[arg-type](https://mypy.readthedocs.io/en/latest/_refs.html#code-arg-type)
```
Regular `find` works ok with bool operations as it accepts `bool` as arg type:
```py
def find(
*args: Mapping[str, Any] | bool,
...
```
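A simplified sketch of the proposed widening (these are stand-in classes, not beanie's actual implementation):

```python
from typing import Any, Mapping, Union

class BaseFindOperator:
    """Stand-in for beanie's BaseFindOperator."""

# mirror find()'s signature by adding bool to the accepted argument types
FindArg = Union[BaseFindOperator, Mapping[str, Any], bool]

class Or(BaseFindOperator):
    def __init__(self, *expressions: FindArg) -> None:
        self.expressions = expressions
```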
| 0easy
|
Title: Stop using failing ujson as default serializer
Body: ujson is no longer actively maintained and, notably, has some critical bugs that are not present in the stdlib json package.
Examples that forced us to revert our ujson usage:
https://github.com/esnme/ultrajson/issues/325 (2 days ago, mine)
https://github.com/esnme/ultrajson/issues/301 (the same bug, reported 1 year ago with no answer)
| 0easy
|
Title: Response Error: b')]}\\'\\n\\n38\\n[[\"wrb.fr\",null,null,null,null,[8]]]\\n54\\n[[\"di\",59],[\"af.httprm\",59,\"4239016509367430469\",0]]\\n25\\n[[\"e\",4,null,null,129]]\\n'. \nTemporarily unavailable due to traffic or an error in cookie values. Please double-check the cookie values and verify your network environment.
Body:
Hi, there:
I encountered the response error using `bardapi` to `ask_about_image`:
`"Response Error: b')]}\\'\\n\\n38\\n[[\"wrb.fr\",null,null,null,null,[8]]]\\n54\\n[[\"di\",59],[\"af.httprm\",59,\"4239016509367430469\",0]]\\n25\\n[[\"e\",4,null,null,129]]\\n'. \nTemporarily unavailable due to traffic or an error in cookie values. Please double-check the cookie values and verify your network environment."`
Then I tried again the Bard UI, and got this warning message.
Is anyone seeing this warning message? And what can we do about it?
| 0easy
|
Title: provide an api to evaluate models
Body: Provide a way to run evaluations and store the results.
| 0easy
|
Title: UnboundLocalError: local variable 'account' referenced before assignment
Body: Error:

twitter.json file:

| 0easy
|
Title: DOC: Add make_valid to API documentation
Body: `make_valid` was added in #2539. However, it is not exposed in the API documentation on the website. | 0easy
|
Title: [DOC] add common unequal length and multivariate patterns to the classification tutorial
Body: Newer patterns to handle unequal length and multivariate data should be added to the classification tutorial.
* handling multivariate data using the "sum of independent distances" approach and arithmetic combinations - `IndepDist`
* handling multivariate data by constructing univariate distances on individual components and combining them - `CombinedDistance` and variable subsetting
* handling unequal length data by using custom distances that are valid for unequal length time series, see here: https://github.com/sktime/sktime/discussions/7232
The appropriate place might be the sections already discussing multivariate and unequal length data, if exists, otherwise a good place should be found. | 0easy
|
Title: `sys.tracebacklimit = 0` causes error for simple numba script
Body: <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
Simple python script fails when `sys.tracebacklimit = 0`:
```python
import sys
import numba
sys.tracebacklimit = 0
@numba.njit
def foo():
d = {}
d[0] = 0
foo()
```
It works OK if `sys.tracebacklimit` is not set, set to `None` or even to `1`
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
| 0easy
|
Title: Admin: Fix popup for reverse_delete
Body: Currently, the message for delete and reverse_delete is the same (the delete message).
It would be good to modify it so that the reverse_delete message is more accurate. | 0easy
|
Title: Split /search follow ups into follow ups with search, and follow ups without search
Body: Follow up without search would just query the index, with search would do another search for the follow up question and also do a follow up response style | 0easy
|
Title: Simplify the command to download the data
Body: ## 🌟 Feature Description
<!-- A clear and concise description of the feature proposal -->
Currently, users have to use the following command to download the data.

This requires the following steps.
1. Install Qlib
2. clone the source code of Qlib
3. Run the script to download the data
This can be simplified by integrating the download script into a Qlib module, so users can download data in only two steps:
1. Install Qlib
2. Run qlib module to download data like this `python -m qlib.XXX.get_data qlib_data --target_dir ~/.qlib/qlib_data/cn_data --region cn`
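A minimal sketch of such a module entry point (the module path and function names here are assumptions for illustration, not Qlib's actual API):

```python
import argparse

def qlib_data(target_dir: str, region: str = "cn") -> str:
    # in the real module this would download and extract the dataset;
    # here we just report what would be done
    return f"downloading {region} data into {target_dir}"

def main(argv=None):
    parser = argparse.ArgumentParser(prog="python -m qlib.<module>.get_data")
    sub = parser.add_subparsers(dest="command", required=True)
    p = sub.add_parser("qlib_data")
    p.add_argument("--target_dir", required=True)
    p.add_argument("--region", default="cn")
    args = parser.parse_args(argv)
    return qlib_data(args.target_dir, args.region)
```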
| 0easy
|
Title: [BUG] ValueError in _RepeatingBasis
Body: I was running code coverage to see what code was not covered.

@RensDimmendaal it feels like the message in the error is off, would I be correct? If so, this might be a great beginner issue. | 0easy
|
Title: Toggle switches
Body: Similar to [this](https://getbootstrap.com/docs/5.1/forms/checks-radios/#switches), we should allow them as well as checkboxes in forms. | 0easy
|
Title: Add more unittests for asyncio stream mode
Body: It's not well covered. | 0easy
|
Title: [BUG-REPORT] Vaex `to_dask_array` fails `AttributeError: 'ValueError' object has no attribute 'numpy'`
Body: Thank you for reaching out and helping us improve Vaex!
Before you submit a new Issue, please read through the [documentation](https://docs.vaex.io/en/latest/). Also, make sure you search through the Open and Closed Issues - your problem may already be discussed or addressed.
**Description**
Please provide a clear and concise description of the problem. This should contain all the steps needed to reproduce the problem. A minimal code example that exposes the problem is very appreciated.
**Software information**
- Vaex version (`import vaex; vaex.__version__)`: {'vaex-core': '4.8.0', 'vaex-hdf5': '0.11.1'}
- Vaex was installed via: pip / conda-forge / from source pip
- OS: MacOS
**Additional information**
Please state any supplementary information or provide additional context for the problem (e.g. screenshots, data, etc..).
```python
import vaex
def tolist(x,y):
return list([x,y])
data = {'A':[1,2,1],'B':['a','b','c'], 'C':['d', 'e', 'f']}
df = vaex.from_dict(data)
df.to_dask_array()
```
| 0easy
|
Title: Legend for ridgeplots with multiple traces per line
Body:
### Discussed in https://github.com/tpvasconcelos/ridgeplot/discussions/314
<div type='discussions-op-text'>
<sup>Originally posted by **abrahme** February 3, 2025</sup>
I was wondering if there was a way to add a legend for each color present in a ridge-plot? For example,
https://ridgeplot.readthedocs.io/en/stable/getting_started/getting_started.html the last example with min and max temperature densities per month are blue and red respectively. However, it would be nice to have a legend on the side indicating that red is max temperature and blue is min temperature. Is it as simple as adding a legend for a typical plotly graph?</div> | 0easy
|
Title: Example fails while building doc
Body: Building the doc fails for example `40_advanced/example_single_configurations` on the current development branch
[Logs here](https://github.com/automl/auto-sklearn/runs/3190465769)
```python
...
generating gallery for examples/40_advanced... [ 50%] example_debug_logging.py
Warning, treated as error:
/home/runner/work/auto-sklearn/auto-sklearn/examples/40_advanced/example_single_configuration.py failed to execute correctly: Traceback (most recent call last):
File "/home/runner/work/auto-sklearn/auto-sklearn/examples/40_advanced/example_single_configuration.py", line 75, in <module>
print(pipeline.named_steps)
AttributeError: 'NoneType' object has no attribute 'named_steps'
...
```
| 0easy
|
Title: api: Extend validity of domain renewal confirmation links
Body: This only affects dynDNS domains.
Currently, the validity is 24 hours. It should be at least a week, or until the scheduled domain deletion date.
The validity period is not encoded in the link, but determined at validation time. At that point, the payload (including domain name) is not yet known, so using the scheduled domain deletion date is not an option.
Alternatively, we may just set it to 4 weeks: We delete domains 4 weeks after the first notification, so that would cover all cases.
No expiration would also work (links are invalidated anyway when the domain owner or other significant things change). ~However, that opens the opportunity for people to renew their domain infinitely without reading their emails. That's not what we want - if people want automation, they should renew their domain by changing some DNS information.~ `Domain.renewal_changed` is included in the action state, so using the link invalidates it as well.
Implementation proposal: At validation time in `AuthenticatedActionSerializer`, use `self.context['view']` to retrieve the view, and make the validity period an attribute of the view classes (with a reasonable default in the base class). | 0easy
|
Title: Replace eval() with json.load()
Body: AAAaaaaaaaaahhhhhhhhhhhhhhhhhh!!!!!!!
https://github.com/mlfoundations/open_clip/blob/91f6cce16b7bee90b3b5d38ca305b5b3b67cc200/src/training/data.py#L59 | 0easy
|
Title: Add a "Hide all Chats" button
Body: I created this lame code to delete all of my conversations using Javascript, which I then put on my browser's developer console to run.
```javascript
// Version 2.0
var xpath = "/html/body/div[1]/div/div/div[2]/div[1]/div/div[2]/div[1]/div[2]/div/div/div";
var elements = document.evaluate(xpath + '//a', document, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
var aTag_count = elements.snapshotLength;
console.log(aTag_count);
for (let i = 0; i < aTag_count; i++) {
var DeleteConversation_button = document.evaluate(xpath + '//a[' + (i + 1) + ']/span/div/div/div/button[2]', document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
if (DeleteConversation_button) {
DeleteConversation_button.click();
} else {
console.log("Delete Conversation button not found.");
}
}
setTimeout(function() {
for (let i = 0; i < aTag_count; i++) {
var Dialog_button = document.evaluate(xpath + '//a[' + (i + 1) + ']/span/div/div[2]/div/div[2]/div/section/footer/button[2]', document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
if (Dialog_button) {
Dialog_button.click();
} else {
console.log("Dialog button not found.");
}
}
}, 200); // 200 milisecond delay
```
But I'm suggesting having something like a "Hide all Conversations" button or checkboxes to select a particular conversation to be deleted, hidden, or opted out of. I know that opting out and deleting is a bad idea for the sake of training the model. | 0easy
|
Title: Non-string value in zappa_settings.environment_variables caused AttributeError
Body: Dear Zappatistas,
I came across this minor thing: I had the following in my zappa_settings.json
`"environment_variables": {"FLASK_DEBUG": 0}`
I tried `zappa update dev` and got back
```
ValueError("The following environment variables are not strings: {}".format(", ".join(non_strings)))
ValueError: The following environment variables are not strings: FLASK_DEBUG
```
followed by
```
print("Error: {}".format(e.message))
AttributeError: 'ValueError' object has no attribute 'message'
```
It worked fine with:
`"environment_variables": {"FLASK_DEBUG": "0"}`
## Expected Behavior
Should properly report error without crashing.
## Actual Behavior
Throws an exception about the exception.
## Possible Fix
in cli.py line 1478
replace
```raise ValueError("The following environment variables are not strings: {}".format(", ".join(non_strings)))```
with
```raise ValueError("The following environment variables are not strings: {}".format(", ".join([str(non_string) for non_string in non_strings])))```
Also, it seems that `ValueError` does not have a `message` attribute, but it can be printed with `str(e)`.
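A quick demonstration of both fixes (pure stdlib):

```python
non_strings = ["FLASK_DEBUG", 0]

# ", ".join() raises TypeError on the int, so stringify each item first
message = ", ".join(str(item) for item in non_strings)

# Python 3 exceptions have no .message attribute; use str(e) instead
try:
    raise ValueError("The following environment variables are not strings: " + message)
except ValueError as e:
    printable = str(e)
```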
## Your Environment
* Zappa version used: 0.45.1
* Operating System and Python version: Windows10 Python3.6
| 0easy
|
Title: docs: djoser endpoint link outdated
Body: The hyperlink referring to djoser endpoints is outdated (actually, we're using an [older version](https://github.com/desec-io/desec-stack/blob/master/api/requirements.txt#L4), but the link is pointing to `latest`). Link is defined in https://github.com/desec-io/desec-stack/blame/master/docs/endpoint-reference.rst#L5
Proposed solution: update link to https://github.com/sunscrapers/djoser/blob/1.1.5/docs/source/getting_started.rst
It appears that outdated docs are not available on readthedocs.com, but only on github.
| 0easy
|
Title: Contribute `Dot plot` to Vizro visual vocabulary
Body: ## Thank you for contributing to our visual-vocabulary! 🎨
Our visual-vocabulary is a dashboard, that serves a a comprehensive guide for selecting and creating various types of charts. It helps you decide when to use each chart type, and offers sample Python code using [Plotly](https://plotly.com/python/), and instructions for embedding these charts into a [Vizro](https://github.com/mckinsey/vizro) dashboard.
Take a look at the dashboard here: https://huggingface.co/spaces/vizro/demo-visual-vocabulary
The source code for the dashboard is here: https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary
## Instructions
0. Get familiar with the dev set-up (this should be done already as part of the initial intro sessions)
1. Read through the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary) of the visual vocabulary
2. Follow the steps to contribute a chart. Take a look at other examples. This [commit](https://github.com/mckinsey/vizro/pull/634/commits/417efffded2285e6cfcafac5d780834e0bdcc625) might be helpful as a reference to see which changes are required to add a chart.
3. Ensure the app is running without any issues via `hatch run example visual-vocabulary`
4. List out the resources you've used in the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary)
5. Raise a PR
**Useful resources:**
- Dot plots: https://plotly.com/python/dot-plots/
- Data chart mastery: https://www.atlassian.com/data/charts/how-to-choose-data-visualization | 0easy
|
Title: [Feature request] Add apply_to_images to RandomRotate90
Body: | 0easy
|
Title: Add frontend language default parameter
Body: Hello there,
first of all: really great work!
I would love to see a parameter/config/function to set the default language for the frontend. So that you do not always have to set them by hand.
Here I would like to take over the implementation. But first I would like to clarify a few questions with you:
- does this feature make sense at all?
- Implementation via cookie in the frontend so it remembers for each user?
OR
- Add two cli arguments and integrate them into the frontend via template variables?
OR
- cli arguments and fetch in frontend via api. e.g. GET /languages/defaults -> { "source": "en", "target": "es"}
OR
- cli arguments and fetch in frontend via api. e.g. GET /frontend/settings -> { "source": "en", "target": "es", "darkmode": true}
-> here already in view for https://github.com/EsmailELBoBDev2/LibreTranslate/commit/68ce70cb78a43b94100ab3eca5ddc7fda9d49d8b
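The API-based options could be sketched roughly like this. The function and parameter names below are placeholders mirroring the proposed CLI flags — none of them exist in LibreTranslate today:

```python
# Hypothetical payload builder for GET /frontend/settings.
# The parameter names mirror the proposed CLI flags; they are
# assumptions, not existing LibreTranslate options.
def frontend_settings(frontend_language_source="en",
                      frontend_language_target="es",
                      darkmode=False):
    return {
        "source": frontend_language_source,
        "target": frontend_language_target,
        "darkmode": darkmode,
    }

# In the Flask app this could then be returned as JSON, e.g.:
# @app.route("/frontend/settings")
# def settings():
#     return jsonify(frontend_settings(args.frontend_language_source,
#                                      args.frontend_language_target))
```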
I prefer one of the last two options. But I am not sure if this is a bit overloaded, because the main feature of this project is the translation api and the frontend is more of a showcase? (So this feature should rather be included in the project which uses this API...)? | 0easy
|
Title: [DOC] Adding minimal working examples to docstrings; a checklist
Body: ## Background
This thread is borne out of the discussion from #968 , in an effort to make documentation more beginner-friendly & more understandable.
One of the subtasks mentioned in that thread was to go through the function docstrings and include a *minimal* working example to each of the public functions in pyjanitor.
Criteria reiterated here for the benefit of discussion:
> It should fit with our existing choice to go with mkdocs, mkdocstrings, and mknotebooks.
> The examples should be minimal and executable and complete execution within 5 seconds per function.
> The examples should display in rich HTML on our docs page.
> We should have an automatic way of identifying whether a function has an example provided or not so that every function has an example.
Sample of what MWE should look like is shown [here](https://github.com/pyjanitor-devs/pyjanitor/issues/968#issuecomment-1003672331).
---
I'm thinking we can create a task list so that 1. we can encourage more users to join in the effort, and 2. make sure we don't do duplicate work. A lot of the groundwork can be covered by selectively copying one or two examples over from the software test suite.
Then we can label this issue as a Help Wanted / Low-Hanging Fruit and get people to mention in this thread if they're intending to work on the files?
### Task list
- [X] functions/add_columns.py
- [x] functions/also.py
- [x] functions/bin_numeric.py
- [x] functions/case_when.py
- [x] functions/change_type.py
- [x] functions/clean_names.py
- [x] functions/coalesce.py
- [x] functions/collapse_levels.py
- [x] functions/complete.py
- [x] functions/concatenate_columns.py
- [x] functions/conditional_join.py
- [x] functions/convert_date.py
- [x] functions/count_cumulative_unique.py
- [x] functions/currency_column_to_numeric.py
- [x] functions/deconcatenate_column.py
- [x] functions/drop_constant_columns.py
- [x] functions/drop_duplicate_columns.py
- [x] functions/dropnotnull.py
- [x] functions/encode_categorical.py
- [x] functions/expand_column.py
- [x] functions/expand_grid.py
- [x] functions/factorize_columns.py
- [x] functions/fill.py
- [x] functions/filter.py
- [x] functions/find_replace.py
- [x] functions/flag_nulls.py
- [x] functions/get_dupes.py
- [x] functions/groupby_agg.py
- [x] functions/groupby_topk.py
- [x] functions/impute.py
- [x] functions/jitter.py
- [x] functions/join_apply.py
- [x] functions/label_encode.py
- [x] functions/limit_column_characters.py
- [x] functions/min_max_scale.py
- [x] functions/move.py
- [x] functions/pivot.py
- [x] functions/process_text.py
- [x] functions/remove_columns.py
- [x] functions/remove_empty.py
- [x] functions/rename_columns.py
- [x] functions/reorder_columns.py
- [x] functions/round_to_fraction.py
- [x] functions/row_to_names.py
- [x] functions/select_columns.py
- [x] functions/shuffle.py
- [x] functions/sort_column_value_order.py
- [x] functions/sort_naturally.py
- [x] functions/take_first.py
- [x] functions/then.py
- [x] functions/to_datetime.py
- [x] functions/toset.py
- [x] functions/transform_columns.py
- [x] functions/truncate_datetime.py
- [x] functions/update_where.py
- [ ] spark/backend.py
- [ ] spark/functions.py
- [x] xarray/functions.py
- [x] biology.py
- [x] chemistry.py
- [x] engineering.py
- [ ] errors.py
- [x] finance.py
- [x] io.py
- [x] math.py
- [x] ml.py
- [x] timeseries.py
|
Title: Expose cpu limit
Body: We allow limiting the memory used by builds. We should allow limiting cpu as well. The simplest would be to only expose the `--cpus` arg, but docker has [several](https://docs.docker.com/config/containers/resource_constraints/#cpu) ways to tune cpu throttling. | 0easy
|
Title: VOT19 json data format
Body: Hi, when I try to run VOT2019 benchmark using pysot toolkit, I found that there might be some problems in VOT2019.json.
The paths in json file are like "**img_names**": ["agility/00000001.jpg",...]
but it should be "**img_names**": ["agility/color/00000001.jpg",...].
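Until the json is fixed upstream, a small script along these lines can patch the paths (a sketch, assuming the usual pysot json layout of `{video_name: {"img_names": [...], ...}}`):

```python
import json

def fix_img_name(name):
    """Insert the missing 'color/' segment, e.g.
    'agility/00000001.jpg' -> 'agility/color/00000001.jpg'."""
    seq, filename = name.split("/", 1)
    if not filename.startswith("color/"):
        filename = "color/" + filename
    return seq + "/" + filename

def fix_vot_json(path):
    # Rewrite every img_names entry in place.
    with open(path) as f:
        data = json.load(f)
    for video in data.values():
        video["img_names"] = [fix_img_name(n) for n in video["img_names"]]
    with open(path, "w") as f:
        json.dump(data, f)
```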
| 0easy
|
Title: Display word count
Body: ## Feature Request
Display the number of words in the current document. | 0easy
|
Title: [BUG] ReadTheDocs formatting broken
Body: Ex: https://pygraphistry.readthedocs.io/en/latest/graphistry.html#graphistry.plotter.Plotter.gsql
Some of the examples break the ReadTheDocs formatting, would be useful to audit & fix | 0easy
|
Title: The Default Working Directory is `/usr/home/$USER`
Body: <!--- Provide a general summary of the issue in the Title above -->
<!--- If you have a question along the lines of "How do I do this Bash command in xonsh"
please first look over the Bash to Xonsh translation guide: https://xon.sh/bash_to_xsh.html
If you don't find an answer there, please do open an issue! -->
## xonfig
<details>
```
+------------------+----------------------------+
| xonsh | 0.11.0 |
| Git SHA | 337cf25a |
| Commit Date | Nov 17 15:37:41 2021 |
| Python | 3.9.18 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.39 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.16.1 |
| on posix | True |
| on linux | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| on jupyter | False |
| jupyter kernel | None |
| xontrib | [] |
| RC file 1 | /home/vehementham/.xonshrc |
+------------------+----------------------------+
```
</details>
## Expected Behavior
<!--- Tell us what should happen -->
When I open Xonsh, the working directory should be `$HOME`.
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
When I open Xonsh, the default working directory is `/usr/home/$USER`
`cd` enters `~/`
`echo $HOME` outputs `/home/vehementham`, `vehementham` being the user.
<!--- If part of your bug report is a traceback, please first enter debug mode before triggering the error
To enter debug mode, set the environment variable `XONSH_DEBUG=1` _before_ starting `xonsh`.
On Linux and OSX, an easy way to do this is to run `env XONSH_DEBUG=1 xonsh` -->
### Traceback (if applicable)
<details>
```
traceback
```
</details>
## Steps to Reproduce
<!--- Please try to write out a minimal reproducible snippet to trigger the bug, it will help us fix it! -->
I installed Xonsh through `pkg`. I am using FreeBSD. I set it as my default shell using `chsh`. I later added it to `/etc/shells`.
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment** | 0easy
|
Title: Add `from_context` class method to theme
Body: Add the option for a theme to be created from the current matplotlib plotting contexts' `rcparams`. | 0easy
|
Title: Display a dropdown in the filter UI for categorical filtering
Body: Hi,
I'm new to plotly dash and I've googled but I don't see a way to make a filter a dropdown akin to excel filters.
I've seen people do "hacks" but this should be the default behaviour out of the box (or at least a setting).
I'm currently evaluating plotly dash vs tools such as metabase and metabase has this out of the box.
From a UX point of view, dropdown filters are a huge win and important for users: when looking at a new dataset, the filters hint at and show the user what values are in the column. Freeform text doesn't tell this story to the user.
|
Title: SyntaxError occurs when dynamically modifying generator function source during runtime
Body: ### Description
When a spider callback function is modified at runtime (e.g., during development with auto-reloading tools), Scrapy's `warn_on_generator_with_return_value` check may fail due to a syntax error caused by inconsistent code parsing. This occurs because `inspect.getsource()` retrieves the updated source code, but line/indentation calculations may reference the original version, leading to incorrect code extraction and AST parsing failures.
### Steps to Reproduce
1. Create a Scrapy spider with a generator-based callback (e.g., using `yield`).
2. Run the spider.
3. During runtime, modify the callback function in a way that changes line breaks or indentation (e.g., adding/removing parentheses across lines).
4. Trigger a new request to the modified callback.
**Expected behavior:** Scrapy should handle code changes gracefully or ignore runtime modifications.
**Actual behavior:** A `SyntaxError` (e.g., `unmatched ')'`) is raised during AST parsing due to mismatched code extraction.
**Reproduces how often:** 100% when code modifications alter line structure during runtime inspection.
### Versions
```
Scrapy : 2.12.0
Python : 3.13.2
Platform : Windows-11
```
### Additional context
**Root Cause**:
The `is_generator_with_return_value()` function in `scrapy/utils/misc.py` uses `inspect.getsource()` to retrieve the current source code of a callback. If the function is modified during runtime (especially changes affecting line breaks/indentation), the parsed code snippet may be truncated incorrectly, causing AST failures.
**Minimal Reproduction (Non-Scrapy)**:
Run this script and modify the `target()` function during execution:
```python
import ast
import time
from inspect import getsource
def target():
# Edit this function during runtime
yield 1
while True:
try:
src = getsource(target)
ast.parse(src)
except Exception as e:
print(f"Failed: {e}")
time.sleep(1)
```
**Affected Code**:
The issue originates from [`scrapy/utils/misc.py` L245-L279](https://github.com/scrapy/scrapy/blob/master/scrapy/utils/misc.py#L245-L279), where dynamic source retrieval interacts poorly with runtime code changes. While `inspect.getsource()` works as designed, Scrapy's logic doesn't account for source mutations after spider initialization.
**Suggested Fixes**:
1. Cache the initial function source code during spider setup.
2. Add an option to disable generator return checks.
3. Implement robust error handling around AST parsing. | 0easy
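Fixes 1 and 3 could be sketched roughly as follows — cache the source at first inspection, and treat a `SyntaxError` as "unknown" rather than crashing. The helper names are illustrative, not Scrapy's actual API:

```python
import ast
import textwrap

_SOURCE_CACHE = {}

def cached_source(callable_, getter):
    """Capture the source once, at first inspection, so later runtime
    edits to the file cannot produce mismatched snippets."""
    key = id(callable_)
    if key not in _SOURCE_CACHE:
        try:
            _SOURCE_CACHE[key] = getter(callable_)
        except (OSError, TypeError):
            _SOURCE_CACHE[key] = None
    return _SOURCE_CACHE[key]

def generator_returns_value(src):
    """Robust variant of the check: a SyntaxError means 'don't know',
    not a crash. (Scrapy's real check also skips nested functions;
    omitted here for brevity.)"""
    try:
        tree = ast.parse(textwrap.dedent(src))
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if isinstance(node, ast.Return) and node.value is not None:
            return True
    return False
```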
|
Title: Clear git history to remove large file?
Body: Hi Gregor, would you like to clear git history to remove previously committed large files in .git? This is creating issues whenever someone clones the repo.
The big file seems to be: ./.git/lfs/objects/ab/32/ab32dca74aff21e80c1d05457e61204216fd50109c6c8bdd158de4231c3cdaf0
@TheInventorMan | 0easy
|
Title: Ignore stdlib logs from urllib3
Body: https://pydanticlogfire.slack.com/archives/C06EDRBSAH3/p1713885605381719
We can specifically ignore the message `https://logfire-api.pydantic.dev:443 "POST /v1/traces HTTP/1.1" 200 2` (and same for `/v1/metrics`) so that the user doesn't have to silence all of urllib3 to avoid this noise. | 0easy
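For reference, the noise can already be silenced selectively with a stdlib logging filter (a sketch based on the message shown above):

```python
import logging

class DropExportLogs(logging.Filter):
    """Drop urllib3's access-log lines for the logfire export
    endpoints while leaving all other urllib3 logging intact."""
    def filter(self, record):
        msg = record.getMessage()
        return not ("/v1/traces" in msg or "/v1/metrics" in msg)

logging.getLogger("urllib3.connectionpool").addFilter(DropExportLogs())
```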
|
Title: if experiment name is too long the suggestion service can't start
Body: ### What happened?
I provided an experiment name that was 57 characters long.
It got stuck waiting for trials to be created: the suggestion service couldn't be started because the generated service name exceeded Kubernetes' 63-character limit.
### What did you expect to happen?
Katib to pick a valid name for the service.
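One possible fix (a sketch, not Katib's actual naming code): truncate the experiment name and append a short hash whenever the derived service name would exceed the 63-character DNS label limit. The `-suggestion` suffix below is a placeholder for whatever Katib actually appends:

```python
import hashlib

MAX_LABEL = 63  # Kubernetes DNS-1123 label limit

def suggestion_service_name(experiment_name, suffix="-suggestion"):
    name = experiment_name + suffix
    if len(name) <= MAX_LABEL:
        return name
    # Too long: keep a deterministic 8-char hash of the full name so
    # truncated names stay unique and stable across reconciliations.
    digest = hashlib.sha1(experiment_name.encode()).hexdigest()[:8]
    keep = MAX_LABEL - len(suffix) - len(digest) - 1
    return experiment_name[:keep] + "-" + digest + suffix
```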
### Environment
Kubernetes version:
```bash
$ kubectl version
Client Version: v1.29.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.3
```
Katib controller version:
```bash
$ kubectl get pods -n kubeflow -l katib.kubeflow.org/component=controller -o jsonpath="{.items[*].spec.containers[*].image}"
docker.io/kubeflowkatib/katib-controller:v0.17.0
```
Katib Python SDK version:
```bash
$ pip show kubeflow-katib
Name: kubeflow-katib
Version: 0.17.0
```
### Impacted by this bug?
Give it a 👍 We prioritize the issues with most 👍 | 0easy
|
Title: dotted paths to enable dynamic params
Body: when done, update this: https://github.com/ploomber/projects/tree/master/cookbook/dynamic-params | 0easy
|
Title: Project Velocity metric API
Body: The canonical definition is here: https://chaoss.community/?p=3572 | 0easy
|
Title: Consider adding caching to the tests so they don't need to run exhaustively
Body: For example, with the pytest-cache extension. | 0easy
|
Title: Unintuitive deduction of dimension type
Body: ```python
optimizer = Optimizer([(0, 1.124124)], base_estimator, n_random_starts=5)
```
to me it is wrong that this would create an `Integer` dimension. We should check if either of the limits is a float and then create a `Continuous` dimension. | 0easy
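The proposed deduction rule is simple to state (a plain-Python sketch, not skopt's actual code):

```python
def deduce_dimension_type(bounds):
    """Return 'Real' if either bound is a float, else 'Integer'.
    Sketch of the proposed rule; skopt currently maps
    (0, 1.124124) to an Integer dimension."""
    low, high = bounds
    if isinstance(low, float) or isinstance(high, float):
        return "Real"
    return "Integer"
```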
|
Title: Update websockets implementation for 3.11
Body: To make the websocket implementation 3.11 compat, we need to remove the deprecated usage of `asyncio.wait` here: https://github.com/sanic-org/sanic/blob/4a416e177aa5037ba9436e53f531631707e87ea7/sanic/server/websockets/impl.py#L521
---
thank you both
@ahopkins - We also get this warning in the error logs when using websockets:
DeprecationWarning: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8, and scheduled for removal in Python 3.11. done, pending = await asyncio.wait(
Is this from the websockets package?
_Originally posted by @vgoklani in https://github.com/sanic-org/sanic/issues/2371#issuecomment-1013105803_
@vgoklani
> Is this from the websockets package?
This is separate and should probably be opened as a new issue. It should be a simple fix:
```python
self.recv_cancel = asyncio.Future()
tasks = (
self.recv_cancel,
# NEXT LINE IS THE CHANGE NEEDED
# TO EXPLICITLY CREATE THE TASK
asyncio.create_task(self.assembler.get(timeout)),
)
done, pending = await asyncio.wait(
tasks,
return_when=asyncio.FIRST_COMPLETED,
)
```
_Originally posted by @ahopkins in https://github.com/sanic-org/sanic/issues/2371#issuecomment-1013822496_ | 0easy
|
Title: 'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM.
Body: I'm getting a bunch of these in a venv after installing requirements.txt. Any tips on how to fix?
'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. Pass 'early_stopping()' callback via 'callbacks' argument instead.
'verbose_eval' argument is deprecated and will be removed in a future release of LightGBM. Pass 'log_evaluation()' callback via 'callbacks' argument instead.
'evals_result' argument is deprecated and will be removed in a future release of LightGBM. Pass 'record_evaluation()' callback via 'callbacks' argument instead. | 0easy
|
Title: Inactive Contributors metric API
Body: The canonical definition is here: https://chaoss.community/?p=3614 | 0easy
|
Title: Update documentation with custom binary path information
Body: ie: Update https://splinter.readthedocs.io/en/latest/drivers/chrome.html with
```
from selenium import webdriver
chrome_options = webdriver.chrome.options.Options()
chrome_options.binary_location = "/path/to/canary"
browser = Browser('chrome', options=chrome_options)
```
but for Firefox as well. | 0easy
|
Title: this repo is too necessary / too good
Body: eom | 0easy
|
Title: Add Default of None for selected_options for ViewStateValue
Body: Having a default of None will allow for consistency within the `ViewStateValue` object. All other attributes default to `None`, but `selected_options` does not get set if the input is an empty list (ex Checkbox). This results in an `AttributeError`.
### Reproducible in:
#### The Slack SDK version
`slack-bolt==1.4.4`
#### Python runtime version
`Python 3.8.3`
#### OS info
```
OS info
ProductName: Mac OS X
ProductVersion: 10.15.7
BuildVersion: 19H524
Darwin Kernel Version 19.6.0: Tue Jan 12 22:13:05 PST 2021; root:xnu-6153.141.16~1/RELEASE_X86_64
```
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
1. Instantiate a `ViewStateValue` with `selected_options` as an empty list
2. See below
### Expected result:
`selected_options` should default to None
### Actual result:
```
>>> d = {"selected_options": []}
>>> v = ViewStateValue(**d)
>>> v
<slack_sdk.ViewStateValue>
>>> v.selected_channel is None
True
>>> v.selected_options is None
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'ViewStateValue' object has no attribute 'selected_options'
```
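A sketch of the proposed change (illustrative, not slack_sdk's actual class): always set the attribute, treating an empty list as `None` so it matches every other attribute's default.

```python
class ViewStateValueSketch:
    """Illustrative stand-in for slack_sdk's ViewStateValue, showing
    the proposed defaulting behaviour."""
    def __init__(self, **kwargs):
        self.selected_channel = kwargs.get("selected_channel")
        # Proposed: always set the attribute; treat an empty list as
        # None so downstream `is None` checks behave consistently.
        self.selected_options = kwargs.get("selected_options") or None
```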
### Requirements
For general questions/issues about Slack API platform or its server-side, could you submit questions at https://my.slack.com/help/requests/new instead. :bow:
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: Request for True Momentum Oscillator (TMO)
Body: First of all thank you for creating this project. It has been a great help. I wanted to request the TMO indicator as it has helped me a lot in my own personal trading. I just started to learn python to backtest my trading strategies, so my knowledge is at a beginner level. I tried using chatgpt to convert the pinescript to python however that wasn't much help. If this could be added in that would be awesome. Thanks!
```c
[https://www.tradingview.com/script/o9BQyaA4-True-Momentum-Oscillator/]
Pinescript:
// @version=4
// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// True Momentum Oscillator
// by SparkyFlary
study(title="True Momentum Oscillator", overlay=false)
length = input(14, title="Length")
calcLength = input(5, title="Calc length")
smoothLength = input(3, title="Smooth length")
lengthType = input("EMA", title="Length moving average type", options=["EMA", "SMA", "RMA"])
calcLengthType = input("EMA", title="Calc length moving average type", options=["EMA", "SMA", "RMA"])
smoothLengthType = input("EMA", title="Smooth length moving average type", options=["EMA", "SMA", "RMA"])
//function for choosing moving averages
f_ma(type, src, len) =>
float result = 0
if type == "EMA"
result := ema(src, len)
if type == "SMA"
result := sma(src, len)
if type == "RMA"
result := rma(src, len)
result
o = open
c = close
s = 0
for i = 0 to length
s := s + (c > o[i] ? 1 : c < o[i] ? - 1 : 0)
data = s
MA = f_ma(lengthType, data, calcLength)
Main = f_ma(calcLengthType, MA, smoothLength)
Signal = f_ma(smoothLengthType, Main, smoothLength)
ob = hline(round(length * .7), title="overbought cutoff line", color=color.gray, linestyle=hline.style_solid)
os = hline(-round(length * .7), title="oversold cutoff line", color=color.gray, linestyle=hline.style_solid)
upper = hline(length, title="upper line", color=color.red, linestyle=hline.style_solid)
lower = hline(-length, title="lower line", color=color.green, linestyle=hline.style_solid)
zero = hline(0, title="zero line", color=color.gray, linestyle=hline.style_solid)
mainPlot = plot(Main, title="main line", color=Main>Signal?color.green:color.red, linewidth=2)
signalPlot = plot(Signal, title="signal line", color=Main>Signal?color.green:color.red)
crossPlot = plot(cross(Main,Signal)?Main:na, title="crossover dot", color=Main>Signal?color.green:color.red, style=plot.style_circles, linewidth=3)
fill(mainPlot, signalPlot, title="main and signal area", color=Main>Signal?color.green:color.red)
fill(ob, upper, title="overbought zone", color=color.red)
fill(os, lower, title="oversold zone", color=color.green)
``` | 0easy
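A rough pure-Python translation of the `Main`/`Signal` lines above, using only the EMA branch of `f_ma` (a sketch — it assumes `s` resets on every bar, as the non-`var` declaration implies, and is not a tested trading tool):

```python
def ema(values, length):
    """Simple exponential moving average (the EMA branch of f_ma)."""
    alpha = 2 / (length + 1)
    out, prev = [], values[0]
    for v in values:
        prev = alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

def tmo(opens, closes, length=14, calc_length=5, smooth_length=3):
    """True Momentum Oscillator: returns (main, signal) lists."""
    data = []
    for t in range(len(closes)):
        # s = sum over i = 0..length of sign(close[t] - open[t - i])
        s = 0
        for i in range(length + 1):
            if t - i < 0:
                break
            o = opens[t - i]
            s += 1 if closes[t] > o else -1 if closes[t] < o else 0
        data.append(s)
    ma = ema(data, calc_length)
    main = ema(ma, smooth_length)
    signal = ema(main, smooth_length)
    return main, signal
```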
|
Title: Docker build fails on v10.0.0
Body: Docker fails on version v10.0.0 and above.
This line is the first offender
`RUN pip install torch==1.9.1+cpu torchvision==0.10.1+cpu -f https://download.pytorch.org/whl/torch_stable.html`
It seems it's unable to find the package when being run inside docker:
```
#27 CANCELED
------
> [linux/arm64 builder 6/17] RUN pip install torch==1.9.1+cpu torchvision==0.10.1+cpu -f https://download.pytorch.org/whl/torch_stable.html:
#0 7.153 Looking in links: https://download.pytorch.org/whl/torch_stable.html
#29 15.44 ERROR: Could not find a version that satisfies the requirement torch==1.9.1+cpu (from versions: 1.8.0, 1.8.1, 1.9.0, 1.10.0, 1.10.1, 1.10.2, 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1)
#29 15.44 ERROR: No matching distribution found for torch==1.9.1+cpu
```
That install command works normally on Windows or on an Ubuntu server, so I'm not too sure what's happening here inside Docker. I'm certain I'm missing something.
|
Title: Add `include_columns` option to `create_pydantic_model`
Body: `create_pydantic_model` has an `exclude_columns` argument, but it would be very useful to also have an `include_columns` argument.
If neither are specified, all columns are returned. If both are specified, then an exception is raised.
```python
class Band(Table):
name = Varchar()
popularity = Integer()
# Imagine it has lots more columns ...
model = create_pydantic_model(include_columns=(Band.name,))
``` | 0easy
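The column-selection rule could look something like this (a plain-Python sketch of the proposed semantics, not Piccolo's implementation):

```python
def resolve_columns(all_columns, include_columns=None, exclude_columns=None):
    """Apply the proposed include/exclude rule: neither given -> all
    columns; both given -> error."""
    if include_columns and exclude_columns:
        raise ValueError(
            "Only one of include_columns / exclude_columns can be given."
        )
    if include_columns:
        return [c for c in all_columns if c in include_columns]
    if exclude_columns:
        return [c for c in all_columns if c not in exclude_columns]
    return list(all_columns)
```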
|
Title: Refactoring procs/pipelines.py
Body: This is metaissue for list of tasks on refactoring.
Refactoring procs/pipelines.py:
* Rename `CommandPipeline.proc` to `last_proc` to represent the actual thing that code is really doing.
* Show the whole pipeline in `CommandPipeline.__repr__` instead of last proc.
* Move all decision logic to the spec build stage. This is also needed for `$XONSH_TRACE_SUBPROC=2` to display correctly.
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Feature request: option to remove 'E', 'W', 'N', 'S' from axis labels
Body: ### Description
<!-- Please provide a general introduction to the issue/proposal. -->
I would like to be able to format my axis labels so that there is no N/S/E/W after the label.
Is this possible? It seems like a simple option could be added to the Longitude/Latitude formatters.
In Basemap there was an option for labelstyle="+/-" that could do exactly this.
<!-- | 0easy
|