text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---
Title: Catch NoBackendError exception and tell user to install ffmpeg
Body: Despite instructions to install ffmpeg in README.md (#414), people still skip it and get audioread's NoBackendError whenever trying to load mp3s. The exception is confusing to the end user and we get a lot of questions about it.
* The error message looks like this: https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/386#issue-646643932
* What people need to do to resolve it is this: https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/431#issuecomment-660647785
Here are some options for dealing with it, sorted in ascending order of preference.
### Option 1: Catch it when it occurs
This would make a nice starter issue for someone who is just getting started with python:
In demo_cli.py and demo_toolbox.py,
1. Use try/except around librosa.load()
* [Lines 132-138 of demo_cli.py](https://github.com/CorentinJ/Real-Time-Voice-Cloning/blob/054f16ecc186d8d4fa280a890a67418e6b9667a8/demo_cli.py#L132-L138)
* [Lines 153-156 of toolbox/\_\_init\_\_.py](https://github.com/CorentinJ/Real-Time-Voice-Cloning/blob/054f16ecc186d8d4fa280a890a67418e6b9667a8/toolbox/__init__.py#L153-L156)
2. Catch the NoBackendError exception (this will require you to import NoBackendError from audioread.exceptions)
3. Print a helpful error message instructing the user to install ffmpeg for mp3 support.
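A minimal sketch of what this could look like (the exact lines, file path, and variable names in demo_cli.py will differ; `"audio.mp3"` here just stands in for the user-supplied path):
```python
import sys

import librosa
from audioread.exceptions import NoBackendError

try:
    # "audio.mp3" is a placeholder for the path the user selected in the demo
    wav, sample_rate = librosa.load("audio.mp3")
except NoBackendError:
    print("Could not decode the audio file. If it is an mp3, please install ffmpeg "
          "(see README.md) for mp3 support and try again.")
    sys.exit(1)
```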
### Option 2: Test for mp3 support as toolbox is loaded
This is a little more advanced, but I think this is preferred.
1. As the toolbox is opened, or when demo_cli.py is started, we either use librosa.load() or audioread.audio_open() on a sample mp3 file to test the capability.
2. If we catch NoBackendError we nag the user and tell them to either install ffmpeg, or rerun the toolbox with `--no_mp3_support`
3. The `--no_mp3_support` flag disallows loading of mp3s when one is selected, preventing NoBackendError from being encountered.
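A rough sketch of the startup check and flag (the argument name follows the proposal above; the bundled sample path is an assumption):
```python
import argparse
import sys

import librosa
from audioread.exceptions import NoBackendError

parser = argparse.ArgumentParser()
parser.add_argument("--no_mp3_support", action="store_true",
                    help="Disable loading of mp3 files")
args = parser.parse_args()

if not args.no_mp3_support:
    try:
        # hypothetical bundled sample used only to probe mp3 decoding capability
        librosa.load("samples/sample.mp3")
    except NoBackendError:
        print("No mp3 backend found. Install ffmpeg for mp3 support, or rerun "
              "with --no_mp3_support to skip mp3 files.")
        sys.exit(1)
```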
### Option 3: Check for ffmpeg backend as toolbox is loaded
Although other backends are able to open mp3, we can force a check for Windows users to see if they followed the instructions and installed ffmpeg. Then do the same nag as option 2 if not detected.
audioread.ffdec.available() can test for ffmpeg availability: [audioread/ffdec.py](https://github.com/beetbox/audioread/blob/master/audioread/ffdec.py)
Take a look at [audioread/\_\_init\_\_.py](https://github.com/beetbox/audioread/blob/master/audioread/__init__.py) and see how it raises NoBackendError. The code is extremely simple.
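For example, the check could be as small as this (the warning text is just a placeholder):
```python
import audioread.ffdec

if not audioread.ffdec.available():
    # ffmpeg binary not found on PATH; mp3 loading will fail with NoBackendError
    print("ffmpeg was not detected; mp3 files cannot be loaded. "
          "See README.md for installation instructions.")
```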
### Option 4: Contribute to audioread to make the error message more descriptive
https://github.com/beetbox/audioread/issues/104 | 0easy
|
Title: [core] Fix mock dependency
Body: ### What happened + What you expected to happen
Ray core's mock object dependency is a mess, one concrete example:
https://github.com/ray-project/ray/blob/master/src/mock/ray/raylet/agent_manager.h
It should have the dependency of `AgentManager` and `DefaultAgentManagerServiceHandler`, but somehow it doesn't...
As a result, we have to include `class_a.h` before `mock_class_a.h` and avoid linter reordering via `clang-format off`, otherwise the build will break.
A concrete example:
https://github.com/ray-project/ray/blob/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/ray/gcs/gcs_server/test/gcs_worker_manager_test.cc#L17-L26
This issue could be split into multiple sub-issues:
- [x] (small) https://github.com/ray-project/ray/tree/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/mock/ray/common/ray_syncer
- [ ] https://github.com/ray-project/ray/tree/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/mock/ray/core_worker
- [ ] (small) https://github.com/ray-project/ray/tree/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/mock/ray/gcs/gcs_client
- [ ] https://github.com/ray-project/ray/tree/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/mock/ray/gcs/gcs_server
- [ ] (small) (https://github.com/ray-project/ray/tree/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/mock/ray/gcs/pubsub)
- [ ] (small) (https://github.com/ray-project/ray/tree/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/mock/ray/gcs/store_client)
- [ ] (small) (https://github.com/ray-project/ray/tree/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/mock/ray/pubsub)
- [ ] https://github.com/ray-project/ray/tree/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/mock/ray/raylet
- [ ] (small) https://github.com/ray-project/ray/tree/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/mock/ray/raylet_client
- [ ] (small) https://github.com/ray-project/ray/tree/051798b384e1e1d0ae9aa3d17d8a8fcf8d69fc74/src/mock/ray/rpc/worker
**Note: For large folders, you don't need to do everything in one PR.**
Steps to take:
1. Create a subissue, which links back to this main issue
2. Pick one item you're interested in and properly add its dependencies (header file and Bazel build file)
3. Update use cases in unit tests and remove `clang-format off` mark
### Versions / Dependencies
N/A
### Reproduction script
N/A
### Issue Severity
None | 0easy
|
Title: Do not exclude files during parsing if using `--suite` option
Body: Currently when the `--suite` option is used, files not matching the specified suite aren't parsed at all. This is a useful performance optimization, but it doesn't work well with the new `Name` setting (#4583) that allows configuring the suite name in the parsed file itself. In addition to that, suites not being parsed and thus not being available for pre-run modifiers can cause surprises. To avoid all these issues, it is better to not use `--suite` for limiting what files are parsed at all.
This change isn't functionally backwards incompatible, but it obviously affects those who use `--suite` to make parsing faster. A precondition to such a change is having an explicit way to limit what files are parsed (#4687). | 0easy
|
Title: [Documentation] Doc on mapping from torchaudio to Albumentations
Body: I was told that no one is using Albumentations in the audio community, even though all transforms from torchaudio exist in Albumentations, although they may have different names.
We need a document / blog post. | 0easy
|
Title: Don't log warning for "Couldn't load adapter" if adapter isn't specified
Body: Currently I get a bunch of warnings like:
```
Couldn't load adapter datasetteapi = shillelagh.adapters.api.datasette:DatasetteAPI
```
even though I am explicitly passing in a list of adapters and not specifying that adapter.
These warnings should be printed only if the adapter is in the `adapters` list / there is no list:
https://github.com/betodealmeida/shillelagh/blob/a427de0b2d1ac27402d70b8a2ae69468f1f3dcad/src/shillelagh/backends/apsw/db.py#L510-L511 | 0easy
|
Title: missing documentation: Estimator Transformer
Body: The `EstimatorTransformer` is complicated enough to add an .rst document for. Might be nice to check if we can automatically test this as well. | 0easy
|
Title: test_batch_path_differ sometimes fails
Body: See https://github.com/scrapy/scrapy/pull/5847#issuecomment-1471778039. | 0easy
|
Title: Make JSONHandler customization docs clearer
Body: As pointed out by @Stargateur in https://github.com/falconry/falcon/issues/1906#issuecomment-817374057, our [`JSONHandler`](https://falcon.readthedocs.io/en/stable/api/media.html#falcon.media.JSONHandler) customization docs could be made clearer by separately illustrating different (albeit closely related) concepts:
* Use a custom JSON library (such as the exemplified `rapidjson`). Customize parameters.
* Use the stdlib's `json` module, just provide custom serialization or deserialization parameters. Also link to the ["Prettifying JSON Responses" recipe](https://falcon.readthedocs.io/en/stable/user/recipes/pretty-json.html), which illustrates customization of `dumps` parameters.
* Add a sentence or two about replacing the default JSON handlers, not just toss in a code snippet as it is at the time of writing this. Also link to [Replacing the Default Handlers](https://falcon.readthedocs.io/en/stable/api/media.html#custom-media-handlers) from that explanation. | 0easy
|
Title: Bug: incorrect parsing of path parameters with nested routers
Body: **Describe the bug**
When using nested routers with path parameters, the values are parsed incorrectly. Specifically, when passing a valid enum value in the subject, only the last character of the path parameter is taken.
**How to reproduce**
```python
from enum import StrEnum
from typing import Annotated, Any
from faststream import FastStream, Path
from faststream.nats import NatsBroker, NatsRouter
class MyEnum(StrEnum):
FIRST = "first"
SECOND = "second"
THIRD = "third"
broker = NatsBroker()
root_router = NatsRouter(prefix="root_router.")
nested_router = NatsRouter()
@nested_router.subscriber("{my_enum}.nested_router")
async def do_nothing(message: Any, my_enum: Annotated[MyEnum, Path()]): ...
root_router.include_router(nested_router)
broker.include_router(nested_router)
app = FastStream(broker)
@app.after_startup
async def run():
await broker.publish("", f"root_router.{MyEnum.THIRD}.nested_router")
```
**Expected behavior**
The my_enum path parameter should be correctly parsed, matching the full enum value (e.g., “third”).
**Observed behavior**
```
pydantic_core._pydantic_core.ValidationError: 1 validation error for do_nothing
my_enum
Input should be 'first', 'second' or 'third' [type=enum, input_value='d', input_type=str]
For further information visit https://errors.pydantic.dev/2.8/v/enum
```
| 0easy
|
Title: Add template to Overleaf
Body: To help people manually edit their resumes | 0easy
|
Title: Implement consistency checks for unitary protocol in the presence of ancillas
Body: **Is your feature request related to a use case or problem? Please describe.**
#6101 was created to update `cirq.unitary` and `cirq.apply_unitaries` protocols to support the case when gates allocate their own ancillas. This was achieved in #6112; however, the fix assumes the decomposition is correct. A consistency check is needed to verify that. This is the fourth task on https://github.com/quantumlib/Cirq/issues/6101#issuecomment-1568686661.
The consistency check should check that the result is
1. Indeed a unitary
2. CleanQubits are restored to the $\ket{0}$ state.
3. Borrowable Qubits are restored to their original state.
**Describe the solution you'd like**
For the correctness checks we need a `cirq.testing.assert_consistent_unitary` that performs the checks listed above.
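A minimal NumPy sketch of check 1 (not the final `cirq.testing` API; the ancilla-restoration checks 2 and 3 would additionally need the decomposition's qubit ordering):
```python
import numpy as np

def assert_is_unitary(matrix: np.ndarray, atol: float = 1e-8) -> None:
    """Check that matrix @ matrix^dagger equals the identity within tolerance."""
    dim = matrix.shape[0]
    assert matrix.shape == (dim, dim), "matrix must be square"
    assert np.allclose(matrix @ matrix.conj().T, np.eye(dim), atol=atol)
```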
**What is the urgency from your perspective for this issue? Is it blocking important work?**
For the first task:
P1 - I need this no later than the next release (end of quarter)
For the second and third tasks:
P3 - I'm not really blocked by it, it is an idea I'd like to discuss / suggestion based on principle | 0easy
|
Title: [new]: `xml2json(xml)`
Body: ### Check the idea has not already been suggested
- [X] I could not find my idea in [existing issues](https://github.com/unytics/bigfunctions/issues?q=is%3Aissue+is%3Aopen+label%3Anew-bigfunction)
### Edit the title above with self-explanatory function name and argument names
- [X] The function name and the argument names I entered in the title above seems self explanatory to me.
### BigFunction Description as it would appear in the documentation
convert xml to json string using a js library
Inspired from https://github.com/salrashid123/bq-udf-xml
### Examples of (arguments, expected output) as they would appear in the documentation
`<a><b>foo</b></a>` --> `{"a": {"b": "foo"}}` | 0easy
|
Title: “Ask for feedback” step.
Body: Create a step that asks “did it run/work/perfect?” and stores the answer in the memory folder.
And let the benchmark.py script check that result, and convert it to a markdown table like benchmark/RESULTS.md , and append it with some metadata to that file. | 0easy
|
Title: Unknown interaction errors
Body: Sometimes there are unknown interaction errors when a response in /gpt converse takes too long to return, or when the user deletes the original message it is responding to before it responds | 0easy
|
Title: [Tech debt] Improve Interface for RandomFog
Body: Right now in the transform we have separate parameters for `fog_coef_lower` and `fog_coef_upper`
Better would be to have one parameter `fog_coef_range = [fog_coef_lower, fog_coef_upper]`
=>
We can update transform to use new signature, keep old as working, but mark as deprecated.
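A sketch of how the constructor could accept both signatures (the fallback defaults of 0.3 and 1.0 are assumptions based on the current transform; the deprecation message is just an example):
```python
import warnings

def resolve_fog_coef_range(fog_coef_range=None, fog_coef_lower=None, fog_coef_upper=None):
    """Return the new-style range, accepting the old parameters with a deprecation warning."""
    if fog_coef_lower is not None or fog_coef_upper is not None:
        warnings.warn(
            "fog_coef_lower/fog_coef_upper are deprecated; use fog_coef_range=[lower, upper]",
            DeprecationWarning,
        )
        return [fog_coef_lower if fog_coef_lower is not None else 0.3,
                fog_coef_upper if fog_coef_upper is not None else 1.0]
    return fog_coef_range if fog_coef_range is not None else [0.3, 1.0]
```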
----
PR could be similar to https://github.com/albumentations-team/albumentations/pull/1704 | 0easy
|
Title: fix: typing failures with modern typed libraries
Body: We limit traitlets and matplotlib to avoid lint failures in https://github.com/widgetti/solara/pull/305
We should fix those errors, and unpin the installations in CI. | 0easy
|
Title: /assignments_demo/assignment04_habr_popularity_ridge.ipynb - Typo in the assignment text
Body: "Initialize DictVectorizer with default parameters.
Apply the fit_transform method to X_train['title'] and the transform method to X_valid['title'] and X_test['title']"
Most likely there is a typo here: it should be X_train[feats], X_valid[feats], X_test[feats] | 0easy
|
Title: VWAP Bands request like IB
Body: Which version are you running? The latest version is on Github. Pip is for major releases.
```
import pandas_ta as ta
print(ta.version)
0.3.64b0
```
**Indicator addition to match another broker**
Hi,
So I'm kind of new around here, and I've been looking for an indicator to match certain behavior of VWAP standard deviation bands, like the bands that appear in Interactive Brokers' management portal.
I've been examining the indicator that was recently added here: https://github.com/twopirllc/pandas-ta/pull/488
and its behavior and results are pretty different from IB's.
(BTW, while searching for this topic it seems that most of the formulas and libs get the same results as your indicator).
In the example attached, I pulled the data of a stock in 5-minute candles, calculated the VWAP and bands using your indicator, and got the same VWAP value, but very different band values. I've been looking for 8 and 11 bands in the IB terminology and definition (example of the settings attached), but the results I'm getting are very different and it seems that the "issue" is with the calculation of the bands over time (again, pretty new to all of this).
I will appreciate help and will provide any data as I can!
Thanks!
DataFrame data:
[wfc_five_minutes_candles.zip](https://github.com/twopirllc/pandas-ta/files/8701973/wfc_five_minutes_candles.zip)
My code and plot:
```python
df = pd.read_json(json.dumps(data))
df.sort_values(by="datetime", inplace=True)
df.set_index(pd.DatetimeIndex(df["datetime"]), inplace=True)
vwap = df.ta.vwap(anchor="D", bands=[8, 11], offset=None, append=False)
last = 100
adp = mpf.make_addplot(vwap.tail(last), type='line')
mpf.plot(df, figratio=(10, 5), type='candle', addplot=adp, volume=True, style='yahoo')
```
plot:

Interactive indicator setup screen and values:

interactive plotting:

| 0easy
|
Title: corpora.TextDirectoryCorpus fails on utf-8 encoded files on windows
Body: #### Problem description
What are you trying to achieve? What is the expected result? What are you seeing instead?
I have a directory of utf-8 encoded files (scraped Reddit submissions selftext, i.e. the text in Reddit posts) in plain text. I wanted to create a corpus using gensim.corpora.TextDirectoryCorpus(<dir_of_scraped_plaintexts>). I expect this to run without error and return a working corpus. Instead I see a UnicodeDecodeError (not to be confused with the UnicodeDecodeError in [FAQ Q10](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ#q10-loading-a-word2vec-model-fails-with-unicodedecodeerror-utf-8-codec-cant-decode-bytes-in-position-)).
<details> <summary>Stack Trace</summary>
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_13728\103931668.py in <module>
1 # load all selftext into gensim
2 all_selftext_dir = Path.cwd() / 'data/all_selftexts'
----> 3 corpus = gensim.corpora.TextDirectoryCorpus(str(all_selftext_dir))
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in __init__(self, input, dictionary, metadata, min_depth, max_depth, pattern, exclude_pattern, lines_are_documents, **kwargs)
433 self.exclude_pattern = exclude_pattern
434 self.lines_are_documents = lines_are_documents
--> 435 super(TextDirectoryCorpus, self).__init__(input, dictionary, metadata, **kwargs)
436
437 @property
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in __init__(self, input, dictionary, metadata, character_filters, tokenizer, token_filters)
181 self.length = None
182 self.dictionary = None
--> 183 self.init_dictionary(dictionary)
184
185 def init_dictionary(self, dictionary):
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in init_dictionary(self, dictionary)
203 metadata_setting = self.metadata
204 self.metadata = False
--> 205 self.dictionary.add_documents(self.get_texts())
206 self.metadata = metadata_setting
207 else:
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\dictionary.py in add_documents(self, documents, prune_at)
192
193 """
--> 194 for docno, document in enumerate(documents):
195 # log progress & run a regular check for pruning, once every 10k docs
196 if docno % 10000 == 0:
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in get_texts(self)
312 yield self.preprocess_text(line), (lineno,)
313 else:
--> 314 for line in lines:
315 yield self.preprocess_text(line)
316
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in getstream(self)
521 except Exception as e:
522 print(path)
--> 523 raise e
524 num_texts += 1
525
D:\Work\Anaconda\envs\cs37\lib\site-packages\gensim\corpora\textcorpus.py in getstream(self)
518 else:
519 try:
--> 520 yield f.read().strip()
521 except Exception as e:
522 print(path)
D:\Work\Anaconda\envs\cs37\lib\encodings\cp1252.py in decode(self, input, final)
21 class IncrementalDecoder(codecs.IncrementalDecoder):
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
24
25 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1897: character maps to <undefined>
</details>
#### Steps/code/corpus to reproduce
**Platform Specific**: this error is reproducible on platforms where `locale.getpreferredencoding() == 'cp1252'`, i.e. it is reproducible only on some _Windows_ machines.
Consider this file: [encoding_err_txt.txt](https://github.com/RaRe-Technologies/gensim/files/8396053/encoding_err_txt.txt)
Place the above file in an empty directory, then run:
```
gensim.corpora.TextDirectoryCorpus(<path_to_dir>)
```
#### Versions
```
>>> import platform; print(platform.platform())
Windows-10-10.0.19041-SP0
>>> import sys; print("Python", sys.version)
Python 3.7.10 (default, Feb 26 2021, 13:06:18) [MSC v.1916 64 bit (AMD64)]
>>> import struct; print("Bits", 8 * struct.calcsize("P"))
Bits 64
>>> import numpy; print("NumPy", numpy.__version__)
NumPy 1.21.5
>>> import scipy; print("SciPy", scipy.__version__)
SciPy 1.7.3
>>> import gensim; print("gensim", gensim.__version__)
gensim 4.1.2
>>> from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
FAST_VERSION 1
```
#### Additional Note
This issue seems to be caused by [gensim.corpora.textcorpus.py:513](https://github.com/RaRe-Technologies/gensim/blob/4c941b454a86bdcead2cb1a174e6ec7158253e29/gensim/corpora/textcorpus.py#L513), where TextDirectoryCorpus.getstream() uses the Python builtin _open()_ without specifying an _encoding=_ argument. This lets Python default to using
``` python
locale.getpreferredencoding(False)
```
for an encoding to read the file. Unfortunately, on the aforementioned platform, the above line returns _cp1252_ which cannot decode some of the _utf-8_ characters.
##### Workarounds
_Python 3.7 and later_: [a UTF-8 mode](https://peps.python.org/pep-0540/) was added where Python reads the environment variable "PYTHONUTF8" and sets _sys.flags.utf8_mode_. If _sys.flags.utf8_mode == 1_, then _locale.getpreferredencoding(False) == "UTF-8"_ and TextDirectoryCorpus is able to load the file.
I spent a few hours tinkering around and reading up on some resources (example: [python's utf-8 mode](https://dev.to/methane/python-use-utf-8-mode-on-windows-212i), [changing locale to change the preferred encoding, did not work](https://stackoverflow.com/questions/27437325/windows-python-changing-encoding-using-the-locale-module)) before discovering the above workaround.
Overall I think this is an easy issue to fix (perhaps by adding an _encoding='utf-8'_ default keyword argument in [TextDirectoryCorpus.\_\_init\_\_(...)](https://github.com/RaRe-Technologies/gensim/blob/4c941b454a86bdcead2cb1a174e6ec7158253e29/gensim/corpora/textcorpus.py#L401) and _self.encoding_ to [gensim.corpora.textcorpus.py:513](https://github.com/RaRe-Technologies/gensim/blob/4c941b454a86bdcead2cb1a174e6ec7158253e29/gensim/corpora/textcorpus.py#L513)) and does not look like it will break anything. It should greatly increase the usability of TextDirectoryCorpus on Windows platforms.
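For reference, a tiny sketch of what the explicit-encoding read would do differently (the attached file name is reused here; the `encoding='utf-8'` keyword is the proposed, not yet existing, parameter):
```python
import locale

# On affected Windows machines the platform default is cp1252, which cannot
# decode some UTF-8 byte sequences:
print(locale.getpreferredencoding(False))  # e.g. 'cp1252'

# Reading with an explicit encoding avoids the UnicodeDecodeError, which is
# what passing encoding='utf-8' through TextDirectoryCorpus would achieve:
with open("encoding_err_txt.txt", "rt", encoding="utf-8") as f:
    text = f.read().strip()
```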
Thanks ☺️ | 0easy
|
Title: Remove RRset write-support for locked users
Body: Currently, locked users can write RRsets to the application database. Only upon unlocking the user account, these RRsets will be propagated to pdns.
This leads to various difficulties regarding validation and consistency, see https://github.com/PowerDNS/pdns/issues/7565#issuecomment-476397025
As the benefit of the feature is questionable, nobody will miss it if it's not there, and it may even lead to confusion ("My PATCH was successful, but changes don't get propagated?!"), it is better to remove the feature. | 0easy
|
Title: `BuiltIn.Log` documentation contains a defect
Body: This is more a typo than a bug, but in the very last line of the docstring for the `BuiltIn.Log` keyword, there is an annotation that one can choose between `type` and `log` formatter options.
Looking at `_get_formatter` method ( [HERE](https://github.com/robotframework/robotframework/blob/572f1e412a14c125e2098ce79e37dd45c2c0f990/src/robot/libraries/BuiltIn.py#LL3019) ), there is no such key in the dictionary. I believe that the latter option should be `len`.
Robot Framework version: 6.0.2

| 0easy
|
Title: Add pre-commit hooks, whatever you like
Body: | 0easy
|
Title: MACD is not working
Body:
```python
macd = ta.macd(data['close'], fast=12, slow=26, signal=9, talib=True, offset=None)
```
Output is missing | 0easy
|
Title: Documentation for customizing admin forms is out-of-date / unclear
Body: The documentation describes how to [customize the import forms](https://django-import-export.readthedocs.io/en/latest/getting_started.html#customize-admin-import-forms). The example shows how you can add an `author` field, but it doesn't make it clear what the purpose of this is, and that you have to provide any filtering capabilities yourself. This has led to some [confusion](https://stackoverflow.com/q/71161607/39296).
I propose to update this section so that it is clearer why you would want to customize the import forms, and to give an example of how to filter using a dropdown. | 0easy
|
Title: skopt.plots in 1-dimension
Body: I'm working on simple examples with optimization with respect to a single variable.
Both
```
from skopt.plots import plot_evaluations
from skopt.plots import plot_objective
```
seem to fail if I'm only optimizing wrt a single variable
```
/Users/cranmer/anaconda/lib/python3.5/site-packages/skopt/plots.py in plot_objective(result, levels, n_points, n_samples, zscale)
305 for j in range(space.n_dims):
306 if i == j:
--> 307 xi, yi = partial_dependence(space, result.models[-1], i,
308 j=None,
309 sample_points=rvs_transformed,
IndexError: list index out of range
``` | 0easy
|
Title: `AutoML::fit_ensemble` with `ensemble_size =0` causes crash
Body: It seems there is no validation on `fit_ensemble` when ensemble size is `0`, causing an issue to appear as seen in #1327 | 0easy
|
Title: Add EDA for input data set
Body: It would be nice to have Exploratory Data Analysis (EDA) similar to what is in https://mljar.com

The EDA can be saved in a separate Markdown file and have link to it from the main AutoML readme. | 0easy
|
Title: Docker startup error
Body: PytzUsageWarning: The localize method is no longer necessary, as this time zone supports the fold attribute (PEP 495). For more details on migrating to a PEP 495-compliant implementation, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
return self.timezone.localize(datetime(**values))
Command: `docker run -d -it -p 80:8080 --name pay baiyuetribe/kamifaka`
The server is an overseas server.
| 0easy
|
Title: Add `expand` attribute to SectionBlock
Body: The [API docs list the attribute `expand`](https://api.slack.com/reference/block-kit/blocks#section), which is currently missing from the SDK version in SectionBlock class.
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [X] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: SpinBox _updateHeight has a type error
Body: PyQtGraph v0.11.0 and v0.12.3, in SpinBox.py line 577 should be
```
self.setMaximumHeight(int(1e6))
```
not
```
self.setMaximumHeight(1e6)
```
The existing line causes a type error when we set `opts['compactHeight'] = False`, because `1e6` is a float, not an int.
Currently my workaround is to keep `opts['compactHeight'] = True` but then I have to manually set the height of the number box to something more reasonable or else it looks janky and squished (a bug that was submitted some years ago).
Thanks so much! | 0easy
|
Title: Connect Oriented Bounding Box to Metrics
Body: # Connect Oriented Bounding Box to Metrics
> [!TIP]
> [Hacktoberfest](https://hacktoberfest.com/) is calling! Whether it's your first PR or your 50th, you’re helping shape the future of open source. Help us build the most reliable and user-friendly computer vision library out there! 🌱
---
Several new features were recently added to supervision:
* Mean Average Precision (mAP)
* F1 Score
* IoU calculation for Oriented Bounding Boxes
Intersection Over Union (IoU) is the starting point when computing these metrics. It determines which detections are considered true positives. However, [take a look](https://github.com/roboflow/supervision/blob/d6aa72c0f2b158b838145a81ed5995db6a1e9015/supervision/metrics/mean_average_precision.py#L176)! The Oriented Box IoU is not supported yet! Help us add support by using `oriented_box_iou_batch`.
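As a rough orientation, the utility can be exercised standalone before wiring it into the metrics; the example below is a sketch that assumes it accepts arrays of box corner points (verify the exact input format against the docs linked below):
```python
import numpy as np
from supervision.detection.utils import oriented_box_iou_batch

# Hypothetical usage: two axis-aligned squares given as 4 corner points each.
boxes_true = np.array([[[0, 0], [10, 0], [10, 10], [0, 10]]], dtype=float)
boxes_detection = np.array([[[5, 5], [15, 5], [15, 15], [5, 15]]], dtype=float)

iou = oriented_box_iou_batch(boxes_true, boxes_detection)
print(iou)  # expected: a 1x1 matrix with the overlap ratio (~0.14 for these squares)
```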
Helpful links:
* [Contribution guide](https://supervision.roboflow.com/develop/contributing/#how-to-contribute-changes)
* Metrics:
* mAP metric: [docs](https://supervision.roboflow.com/develop/metrics/mean_average_precision/), [code](https://github.com/roboflow/supervision/blob/d6aa72c0f2b158b838145a81ed5995db6a1e9015/supervision/metrics/mean_average_precision.py#L25)
* F1 Score: [docs](https://supervision.roboflow.com/develop/metrics/f1_score/), [code](https://github.com/roboflow/supervision/blob/d6aa72c0f2b158b838145a81ed5995db6a1e9015/supervision/metrics/f1_score.py#L25)
* Oriented box IoU calculation function: [docs](https://supervision.roboflow.com/develop/detection/utils/#supervision.detection.utils.oriented_box_iou_batch), [code](https://github.com/roboflow/supervision/blob/d6aa72c0f2b158b838145a81ed5995db6a1e9015/supervision/detection/utils.py#L143)
* [Supervision Cheatsheet](https://roboflow.github.io/cheatsheet-supervision/)
* [Colab Starter Template](https://colab.research.google.com/drive/1rin7WrS-UvVIe-_Gfxmu-yVslGphOq89#scrollTo=pjmCrNre2g58)
* [Prior metrics test Colab](https://colab.research.google.com/drive/1qSMDDpImc9arTgQv-qvxlTA87KRRegYN) | 0easy
|
Title: Document which environment variables are passed through by default
Body: Documentation for tox3 had https://tox.wiki/en/3.27.0/config.html#conf-passenv
Documentation for tox4 does not show the list any more
Also see https://github.com/tox-dev/tox/blob/6b1cc141aeb9501aa23774056fbc7179b719e200/src/tox/tox_env/api.py#L179-L204 | 0easy
|
Title: [Documentation] Add explanation why you may get additional keypoints or bounding boxes with Reflection Padding
Body: This relates to the question of why the number of keypoints or bounding boxes increased: https://github.com/albumentations-team/albumentations/issues/2055
We need to add a clear explanation with an example to the docs about the behavior of transforms with reflection padding | 0easy
|
Title: Avoid `StreamlitDuplicateElementId` error when the same widget is in the main area and sidebar
Body: ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Using the same widget in the main area and the sidebar results in a `StreamlitDuplicateElementId` error:
```python
import streamlit as st
st.button("Button")
st.sidebar.button("Button")
```

However, we could easily differentiate the automatically generated keys for these two elements, given that one of them is in the sidebar and the other isn't.
### Why?
Convenience, don't need to assign a key but it "just works".
### How?
_No response_
### Additional Context
_No response_ | 0easy
|
Title: [FEA] tree layouts should support a planar layout mode
Body: **Is your feature request related to a problem? Please describe.**
When plotting tree data, edges sometimes intersect. Ex:
```
g = (g
.get_degrees()
.tree_layout(
level_sort_values_by='degree',
level_sort_values_by_ascending=False,
level_align='center',
vertical=True,
ascending=True)
.settings(url_params={'bg': '%23' + '000000', 'edgeCurvature': 0.05,'edgeInfluence': 1.5,'pointSize' : 0.5})
.layout_settings(play=0, locked_x=True)
)
return g.plot(render = False,memoize = False)
```
On planar graphs (ex: trees), this should be avoidable
**Describe the solution you'd like**
The default should be something like sort-by-parent-position (`level_sort_mode='parent'`?), which can be used if `level_sort_values_by` is not set
**Describe alternatives you've considered**
* radial versions of the same
* implement / embed reingold tilford (tidier) (ex: https://hci.stanford.edu/courses/cs448b/f09/lectures/CS448B-20091021-GraphsAndTrees.pdf)
| 0easy
|
Title: Dependency Models created from Form input data are losing metadata (fields set) and are enforcing validation on default values.
Body:
### Discussed in https://github.com/fastapi/fastapi/discussions/13380
Originally posted by **sneakers-the-rat**, February 16, 2025
### First Check
- [X] I added a very descriptive title here.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/pydantic/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
File: fastapi_defaults_bug.py
```python
import uvicorn
from typing import Annotated
from pydantic import BaseModel, Field
from fastapi import FastAPI, Form
class ExampleJsonModel(BaseModel):
sample_field_1: Annotated[bool, Field(default=True)]
sample_field_2: Annotated[bool, Field(default=False)]
sample_field_3: Annotated[bool, Field(default=None)]
sample_field_4: Annotated[str, Field(default=0)] # This is dangerous but can be used with a validator
class ExampleFormModel(BaseModel):
sample_field_1: Annotated[bool, Form(default=True)]
sample_field_2: Annotated[bool, Form(default=False)]
sample_field_3: Annotated[bool, Form(default=None)]
sample_field_4: Annotated[str, Form(default=0)] # This is dangerous but can be used with a validator
class ResponseSampleModel(BaseModel):
fields_set: Annotated[list, Field(default_factory=list)]
dumped_fields_no_exclude: Annotated[dict, Field(default_factory=dict)]
dumped_fields_exclude_default: Annotated[dict, Field(default_factory=dict)]
dumped_fields_exclude_unset: Annotated[dict, Field(default_factory=dict)]
app = FastAPI()
@app.post("/form")
async def form_endpoint(model: Annotated[ExampleFormModel, Form()]) -> ResponseSampleModel:
return ResponseSampleModel(
fields_set=list(model.model_fields_set),
dumped_fields_no_exclude=model.model_dump(),
dumped_fields_exclude_default=model.model_dump(exclude_defaults=True),
dumped_fields_exclude_unset=model.model_dump(exclude_unset=True)
)
@app.post("/json")
async def form_endpoint(model: ExampleJsonModel) -> ResponseSampleModel:
return ResponseSampleModel(
fields_set=list(model.model_fields_set),
dumped_fields_no_exclude=model.model_dump(),
dumped_fields_exclude_default=model.model_dump(exclude_defaults=True),
dumped_fields_exclude_unset=model.model_dump(exclude_unset=True)
)
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000)
```
Test File: test_fastapi_defaults_bug.py
```python
import pytest
from fastapi.testclient import TestClient
from fastapi_defaults_bug import (
app,
ExampleFormModel,
ExampleJsonModel,
ResponseSampleModel
)
@pytest.fixture(scope="module")
def fastapi_client():
with TestClient(app) as test_client:
yield test_client
################
# Section 1: Tests on Form model -> no fastapi, pydantic model
################
def test_form_model_pydantic_only_defaults():
f_model = ExampleFormModel()
for field_name, field in f_model.model_fields.items():
assert getattr(f_model, field_name) == field.default
def test_form_model_pydantic_all_unset():
f_model = ExampleFormModel()
assert not f_model.model_fields_set
def test_form_model_pydantic_set_1():
f_model = ExampleFormModel(sample_field_1=True) # Those set have the same value of default
assert "sample_field_1" in f_model.model_fields_set
assert len(f_model.model_fields_set) == 1
def test_form_model_pydantic_set_2():
f_model = ExampleFormModel(sample_field_1=True, sample_field_2=False) # Those set have the same value of default
assert "sample_field_1" in f_model.model_fields_set
assert "sample_field_2" in f_model.model_fields_set
assert len(f_model.model_fields_set) == 2
def test_form_model_pydantic_set_all():
f_model = ExampleFormModel(
sample_field_1=True,
sample_field_2=False,
sample_field_3=True,
sample_field_4=""
) # Those set could have different values from default
assert not set(f_model.model_fields).difference(f_model.model_fields_set)
################
# Section 2: Same Tests of Form on Json model -> they are the same on different model
################
def test_json_model_pydantic_only_defaults():
j_model = ExampleJsonModel()
for field_name, field in j_model.model_fields.items():
assert getattr(j_model, field_name) == field.default
def test_json_model_pydantic_all_unset():
j_model = ExampleJsonModel()
assert not j_model.model_fields_set
def test_json_model_pydantic_set_1():
j_model = ExampleJsonModel(sample_field_1=True) # Those set have the same value of default
assert "sample_field_1" in j_model.model_fields_set
assert len(j_model.model_fields_set) == 1
def test_json_model_pydantic_set_2():
j_model = ExampleJsonModel(sample_field_1=True, sample_field_2=False) # Those set have the same value of default
assert "sample_field_1" in j_model.model_fields_set
assert "sample_field_2" in j_model.model_fields_set
assert len(j_model.model_fields_set) == 2
def test_json_model_pydantic_set_all():
j_model = ExampleJsonModel(
sample_field_1=True,
sample_field_2=False,
sample_field_3=True,
sample_field_4=""
) # Those set could have different values from default
assert not set(j_model.model_fields).difference(j_model.model_fields_set)
def test_form_json_model_share_same_default_behaviour():
f_model = ExampleFormModel()
j_model = ExampleJsonModel()
for field_name, field in f_model.model_fields.items():
assert getattr(f_model, field_name) == getattr(j_model, field_name)
################
# Section 3: Tests on Form model with fastapi
################
def test_submit_form_with_all_values(fastapi_client: TestClient):
form_content = {
"sample_field_1": "False",
"sample_field_2": "True",
"sample_field_3": "False",
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/form", data=form_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 4
assert not set(form_content).symmetric_difference(set(response_model.fields_set))
def test_submit_form_with_not_all_values(fastapi_client: TestClient):
"""
This test should pass but fails because fastapi is preloading default and pass those values
on model creation, losing the ability to know if a field has been set.
:param fastapi_client:
:return:
"""
form_content = {
"sample_field_1": "False",
"sample_field_3": "False",
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/form", data=form_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 3 # test will fail here and below
assert not set(form_content).symmetric_difference(set(response_model.fields_set))
def test_submit_form_with_no_values(fastapi_client: TestClient):
"""
This test should pass but fails because fastapi is preloading default and pass those values
on model creation, losing the ability to not have validation on default value.
:param fastapi_client:
:return:
"""
form_content = {}
response = fastapi_client.post("/form", data=form_content)
assert response.status_code == 200 # test will fail here and below -> will raise 422
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 0
assert not set(form_content).symmetric_difference(set(response_model.fields_set))
################
# Section 4: Tests on Json model with fastapi
################
def test_submit_json_with_all_values(fastapi_client: TestClient):
json_content = {
"sample_field_1": False,
"sample_field_2": True,
"sample_field_3": False,
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/json", json=json_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 4
assert not set(json_content).symmetric_difference(set(response_model.fields_set))
def test_submit_json_with_not_all_values(fastapi_client: TestClient):
"""
This test will pass but the same not happen with Form.
:param fastapi_client:
:return:
"""
json_content = {
"sample_field_1": False,
"sample_field_3": False,
"sample_field_4": "It's a random string"
}
response = fastapi_client.post("/json", json=json_content)
assert response.status_code == 200
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 3 # This time will not fail
assert not set(json_content).symmetric_difference(set(response_model.fields_set))
def test_submit_json_with_no_values(fastapi_client: TestClient):
"""
This test will pass but the same not happen with Form.
:param fastapi_client:
:return:
"""
json_content = {}
response = fastapi_client.post("/json", json=json_content)
assert response.status_code == 200 # This time will not fail
response_model = ResponseSampleModel(**response.json())
assert len(response_model.fields_set) == 0
assert not set(json_content).symmetric_difference(set(response_model.fields_set))
```
### Description
This is a generalized version of the issue reported in https://github.com/fastapi/fastapi/discussions/13380 .
This issue does not affect JSON body data.
For models created from a form, during the parsing phase their default values are preloaded and passed to the validator to create the model.
1) This leads to a loss of information regarding which fields have been explicitly set, since default values are now considered as having been provided.
2) Consequently, validation is enforced on default values, which might not be the intended behavior and is in any case different from the behavior with a JSON body.
### Operating System
macOS - Linux
### Operating System Details
_No response_
### FastAPI Version
0.115.8
### Pydantic Version
2.10.6
### Python Version
Python 3.11 - Python 3.13.1 | 0easy
|
Title: [BUG] Notebooks use api=1 auth
Body: **Describe the bug**
Some demos use api=1/2 instead of api=3
**To Reproduce**
See main analyst notebook
**Expected behavior**
Should instead have something like:
```python
# To specify Graphistry account & server, use:
# graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com')
# For more options, see https://github.com/graphistry/pygraphistry#configure
| 0easy
|
Title: Add tests that confirm primitive input_types are the expected shapes
Body: There are a number of assumptions we make about the shape of Primitive `input_types` lists:
- It's either a list of ColumnSchema objects or a list of lists of ColumnSchema objects (and not a combination)
- All sub-lists are the same length
- No `input_types` list or sublist is empty
As we may need to rely on these assumptions at some point, we should add tests that confirm these assumptions for all primitives, so that if we add a Primitive that breaks any of these assumptions in the future, we are notified. | 0easy
|
Title: ASK AI should work with open source self hosted/cloud hosted LLMs in open source pygwalker
Body: **Is your feature request related to a problem? Please describe.**
I am unable to integrate or use Amazon Bedrock, Azure GPT, Gemini or my own llama 2 LLM for the ASK AI feature.
**Describe the solution you'd like**
Either I should have the option to remove the real estate used by the ASK AI bar in the open source PyGwalker, or I should be able to integrate custom LLM endpoints.
| 0easy
|
Title: ClassificationScoreVisualizers should return accuracy
Body: See #358 and #213 -- classification score visualizers should return accuracy when `score()` is called. If F1 or accuracy is not in the figure it should also be included in the figure. | 0easy
|
Title: Allow multi-line queries in REPL
Body: We should allow user to write multi-line queries in the REPL, running them only after a semi-colon. | 0easy
|
Title: tools.reduce: matrix rank affecting the dimensions of the result
Body: `import hypertools as hyp`
`import numpy as np`
`print hyp.tools.reduce(np.random.normal(0,1,[5,100]),ndims=10).shape`
This is the code I tried which was supposed to give me a 5x10 dimension matrix. However, because the rank of the matrix is 5, PCA is unable to generate 10 dimensions from the data. Therefore the resulting matrix I got was a 5x5 matrix.
I talked about this issue with Professor Manning today, and he suggested a fix to this problem:
if the number of dimensions to reduce to is greater than the rank of the matrix, then pad the matrix with rows of 0s to increase the rank, do PCA then eliminate the 0 rows.
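A quick NumPy/scikit-learn sketch of that suggestion (zero-padding rows so PCA can emit the requested number of components, then dropping the padded rows again); this is only an illustration of the idea, not hypertools code:
```python
import numpy as np
from sklearn.decomposition import PCA

x = np.random.normal(0, 1, [5, 100])
ndims = 10

n_pad = max(0, ndims - x.shape[0])            # rows of zeros so PCA can emit ndims components
padded = np.vstack([x, np.zeros((n_pad, x.shape[1]))])
reduced = PCA(n_components=ndims).fit_transform(padded)
reduced = reduced[:x.shape[0]]                # drop the padded rows again

print(reduced.shape)  # (5, 10); components beyond the original rank carry ~zero variance
```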
Could you look into this issue and let me know what you think? Thanks! | 0easy
|
Title: Enhance performance of visiting parsing model
Body: When Robot parses data files, it produces a model that both Robot itself and external tools use. Going through the model is easiest by using [visitors](https://robot-framework.readthedocs.io/en/stable/autodoc/robot.parsing.model.html#robot.parsing.model.visitor.ModelVisitor). That is typically pretty fast, but it is possible to enhance the performance, for example, by caching visitor methods. These enhancements are especially important for external tools such as editor plugins that need to process a lot of data.
We already have PR #4911 by @d-biehl that adds visitor method caching. This issue exists mainly for tracking purposes, but it also covers mentioning caching in visitor documentation. Possible additional performance enhancements can be linked to this same issue as well. | 0easy
|
Title: [DOCS] Colors tutorial
Body: **Is your feature request related to a problem? Please describe.**
Colors can use a full tutorial, due to issues like https://github.com/graphistry/pygraphistry/issues/241
**Describe the solution you'd like**
Cover:
* Explicit colors: palettes, RGB hex
* Symbolic encodings: categorical, continuous, defaults
* Points vs edges
* Linked from main docs + pygraphistry homepage
**Describe alternatives you've considered**
Current docs (code, readme) are not enough
| 0easy
|
Title: Force json response for autodraw drawing agent
Body: Within `/gpt converse` there is an agent that creates a prompt and determines if drawing is needed when a conversation message is sent with gpt-vision. Currently, when I set response_format in the request to the model, the upstream API seems to complain, saying that that parameter isn't allowed.
It may not be supported by gpt-4-vision? Otherwise, we need to find a fix so we don't depend on retries to enforce JSON. | 0easy
|
Title: [tech debt] Merge `ShiftScaleRotate` and `Affine`
Body: Both do the same, but `Affine` is much faster.
1. Merge the two classes.
2. Add a deprecation warning to `ShiftScaleRotate` | 0easy
|
Title: feat: implement "update" for Milvus
Body: Milvus does not directly support the functionality to update existing data.
The workaround is to delete+index data that you want to update. | 0easy
|
Title: "Installed capacity" label is shown for aggregated data
Body: **Describe the bug**
"Installed capacity" label should not be shown for aggregated data
**To Reproduce**
Steps to reproduce the behavior:
1. Pick any zone
2. Select Yearly view
**Expected behavior**
"Installed capacity" label should not be shown for aggregated data.
**Screenshots**
<img width="446" alt="image" src="https://github.com/user-attachments/assets/b0ee6e71-7205-423b-9c6a-1b2e6b85c735">
| 0easy
|
Title: High correlation warning printed multiple times
Body: I get the same warning "High correlation" with the same other column four times in the report.
Looks like a bug where the warning is accidentally generated multiple times or not de-duplicated properly.
Is it easy to spot the issue or reproduce? Or should I try to extract a standalone test case?
This is with pandas 1.3.0 and pandas-profiling 3.0.0.
<img width="572" alt="Screenshot 2021-09-05 at 18 54 44" src="https://user-images.githubusercontent.com/852409/132135015-45c0a273-763a-430e-b12f-d340e79b3ea7.png">
| 0easy
|
Title: [From flask-restplus #589] Stop using failing ujson as default serializer
Body: Flask-restx uses `ujson` for serialization by default, falling back to normal `json` if unavailable.
As has been mentioned before, `ujson` is quite flawed, with some people reporting rounding errors.
Additionally, `ujson.dumps` does not take the `default` argument, which allows one to pass any custom method as the serializer. Since my project uses some custom serialization, I have been forced to fork flask-restx and just remove the ujson import line...
You could add an option to choose which lib to use, or completely remove ujson from there. I think it is bad practice to have to maintain code for two libraries with different APIs.
An alternative could be to make ujson an optional dependency and not list it in the package dependencies, but keep the current imports as they are. As a result, anyone wanting to use ujson can install it and it will take over transparently.
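A sketch of what the optional-dependency route could look like (where exactly this lives in flask-restx is up to the maintainers; the key point is that ujson is no longer a declared install requirement):
```python
# ujson is not listed under install_requires; it only takes over if the user
# installed it explicitly, otherwise the stdlib json module is used.
try:
    import ujson as json
except ImportError:
    import json
```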
Original issue: `https://github.com/noirbizarre/flask-restplus/issues/589`
I would gladly make the PR if you are happy with one of these solutions | 0easy
|
Title: [DOC] Update Pull Request Template with `netlify` option
Body: # Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
Currently, the recommended approach for reviewing documentation updates is to build the docs locally. This may still be the recommended approach, but a note should be included that lets developers know that they can preview the docs from the PR checks as well (with the new `netlify` inclusion).
I would like to propose a change, such that now the docs have examples on how to use the `netlify` doc previews from a PR.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/.github/pull_request_template.md)
- Maybe we want to add a line to the `CONTRIBUTING.rst` files as well? 🤷♂️
| 0easy
|
Title: topic 5 part 1 summation sign
Body: [comment in ODS](https://opendatascience.slack.com/archives/C39147V60/p1541584422610100) | 0easy
|
Title: [BUG] AttributeError error in merge_models function after document update
Body: **Describe the bug**
An error occurs in the merge_models function after defining a new field in the document update in the .update method that is not included in the pydantic model using the extra='allow' option.
Follow the code example below and the exception
**To Reproduce**
```python
import asyncio
from beanie import Document, init_beanie
from pydantic import ConfigDict
async def main():
class DocumentTestModelWithModelConfigExtraAllow(Document):
model_config = ConfigDict(extra='allow')
await init_beanie(
connection_string='mongodb://localhost:27017/beanie',
document_models=[DocumentTestModelWithModelConfigExtraAllow],
)
doc = DocumentTestModelWithModelConfigExtraAllow()
await doc.insert()
await doc.update({"$set": {"my_extra_field": 12345}})
assert doc.my_extra_field == 12345
if __name__ == '__main__':
asyncio.run(main())
```
**Exception**
```
Traceback (most recent call last):
File "/home/myuser/Documents/.gits/myproject/test.py", line 402, in <module>
asyncio.run(main())
File "/home/myuser/.pyenv/versions/3.11.10/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/home/myuser/.pyenv/versions/3.11.10/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/myuser/.pyenv/versions/3.11.10/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/myuser/Documents/.gits/myproject/test.py", line 397, in main
await doc.update({"$set": {"my_extra_field": 12345}})
File "/home/myuser/.cache/pypoetry/virtualenvs/myproject-B1nOShX4-py3.11/lib/python3.11/site-packages/beanie/odm/actions.py", line 239, in wrapper
result = await f(
^^^^^^^^
File "/home/myuser/.cache/pypoetry/virtualenvs/myproject-B1nOShX4-py3.11/lib/python3.11/site-packages/beanie/odm/utils/state.py", line 85, in wrapper
result = await f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/myuser/.cache/pypoetry/virtualenvs/myproject-B1nOShX4-py3.11/lib/python3.11/site-packages/beanie/odm/documents.py", line 740, in update
merge_models(self, result)
File "/home/myuser/.cache/pypoetry/virtualenvs/myproject-B1nOShX4-py3.11/lib/python3.11/site-packages/beanie/odm/utils/parsing.py", line 27, in merge_models
left_value = getattr(left, k)
^^^^^^^^^^^^^^^^
File "/home/myuser/.cache/pypoetry/virtualenvs/myproject-B1nOShX4-py3.11/lib/python3.11/site-packages/pydantic/main.py", line 856, in __getattr__
raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
AttributeError: 'DocumentTestModelWithModelConfigExtraAllow' object has no attribute 'my_extra_field'
```
| 0easy
|
Title: [Feature Request] Add per image and per channel to Normalize
Body: Normalization transform could be expanded to the:
- per image
- per image / per channel
- min_max
- min_max / per channel
And it makes sense to add to the docstring typical values for mean and std for standard and inception. Probably as an `example` section.
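For reference, a tiny NumPy sketch of two of the listed modes (an HWC image layout and the epsilon value are assumptions):
```python
import numpy as np

def normalize_per_image_per_channel(img: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Standardize each channel of a single HWC image with its own mean/std."""
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True)
    return (img - mean) / (std + eps)

def min_max_per_channel(img: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Rescale each channel of a single HWC image to [0, 1]."""
    lo = img.min(axis=(0, 1), keepdims=True)
    hi = img.max(axis=(0, 1), keepdims=True)
    return (img - lo) / (hi - lo + eps)
```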
Basically to follow:
https://github.com/ChristofHenkel/kaggle-landmark-2021-1st-place/blob/main/data/ch_ds_1.py#L114-L153 | 0easy
|
Title: [UX] Showing logs for automatic translation of file uploads in `jobs launch`
Body: <!-- Describe the bug report / feature request here -->
We have the logs showing for the normal `sky launch`'s file_mounts, but we don't show the logs for `jobs launch`'s automatic translation. We should have a log file for that. (requested by a user)
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
| 0easy
|
Title: [DOCS] Getting Started with Taipy Page --Run Command
Body: ### Issue Description
It would be nice if the Getting Started page had a line or two with the command to start the Taipy application, so that new developers could get started right away. Right now the code ends with the line below and one has to navigate to https://docs.taipy.io/en/latest/manuals/cli/run/
if __name__ == "__main__":
Gui(page=page).run(title="Dynamic chart")
### Screenshots or Examples (if applicable)

### Proposed Solution (optional)
Nice to have something like this in the Getting Started page (https://docs.taipy.io/en/latest/getting_started/)

### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | 0easy
|
Title: [bug] In Python 3.4 and below inspect.stack() returns just a tuple in the list, not a list of named tuples
Body: **Describe the bug**
In Python 3.4 and below `inspect.stack()` returns just a tuple in the list, not a list of named tuples. Only since version 3.5 does it return a list of named tuples.
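For reference, a version-agnostic way to read the caller's filename: on 3.4 the frame record is a plain tuple, so positional indexing works on both old and new Pythons (this is a sketch of a compatible access pattern, not the dynaconf patch itself):
```python
import inspect
import os

frame_record = inspect.stack()[-1]
# index 1 is the filename both in the old plain-tuple form (<=3.4) and in the
# FrameInfo named tuple (>=3.5)
script_dir = os.path.dirname(os.path.abspath(frame_record[1]))
```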
**To Reproduce**
Steps to reproduce the behavior:
1. Having the following folder structure
<!-- Describe or use the command `$ tree -v` and paste below -->
<details>
<summary> Project structure </summary>
```bash
# example.py
# settings.ini
```
</details>
2. Having the following config files:
<!-- Please adjust if you are using different files and formats! -->
<details>
<summary> Config files </summary>
**settings.ini**
```ini
[default]
path = 'example.txt'
```
</details>
3. Having the following app code:
<details>
<summary> Code </summary>
**example.py**
```python
from dynaconf import settings
print(settings.PATH)
```
</details>
4. Executing under the following environment
<details>
<summary> Execution </summary>
```bash
$ python example.py
Traceback (most recent call last):
File "example.py", line 3, in <module>
print(settings.PATH)
File "C:\Python\Python34\lib\site-packages\dynaconf\base.py", line 93, in __getattr__
self._setup()
File "C:\Python\Python34\lib\site-packages\dynaconf\base.py", line 118, in _setup
default_settings.reload()
File "C:\Python\Python34\lib\site-packages\dynaconf\default_settings.py", line 74, in reload
start_dotenv(*args, **kwargs)
File "C:\Python\Python34\lib\site-packages\dynaconf\default_settings.py", line 61, in start_dotenv
or _find_file(".env", project_root=root_path)
File "C:\Python\Python34\lib\site-packages\dynaconf\utils\files.py", line 62, in find_file
script_dir = os.path.dirname(os.path.abspath(inspect.stack()[-1].filename))
AttributeError: 'tuple' object has no attribute 'filename'
```
</details>
**Expected behavior**
```python
from dynaconf import settings
print(settings.PATH)
# 'example.txt'
```
**Debug output**
<details>
<summary> Debug Output </summary>
```bash
export `DEBUG_LEVEL_FOR_DYNACONF=DEBUG` reproduce your problem and paste the output here
2019-08-26:16:46:23,988 DEBUG [default_settings.py:55 - start_dotenv] Starting Dynaconf Dotenv Base
2019-08-26:16:46:23,989 DEBUG [files.py:57 - find_file] No root_path for .env
Traceback (most recent call last):
File "example.py", line 3, in <module>
print(settings.PATH)
File "C:\Python\Python34\lib\site-packages\dynaconf\base.py", line 93, in __getattr__
self._setup()
File "C:\Python\Python34\lib\site-packages\dynaconf\base.py", line 118, in _setup
default_settings.reload()
File "C:\Python\Python34\lib\site-packages\dynaconf\default_settings.py", line 74, in reload
start_dotenv(*args, **kwargs)
File "C:\Python\Python34\lib\site-packages\dynaconf\default_settings.py", line 61, in start_dotenv
or _find_file(".env", project_root=root_path)
File "C:\Python\Python34\lib\site-packages\dynaconf\utils\files.py", line 62, in find_file
script_dir = os.path.dirname(os.path.abspath(inspect.stack()[-1].filename))
AttributeError: 'tuple' object has no attribute 'filename'
```
</details>
**Environment (please complete the following information):**
- OS: Windows 8.1
- Dynaconf Version 2.0.4
- Python Version 3.4.3
| 0easy
|
Title: Potential Memory Leak in Response Buffer
Body: ## 🐛 Bug
The [response_buffer](https://github.com/Lightning-AI/LitServe/blob/08a9caa4360aeef94ee585fc5e88f721550d267b/src/litserve/server.py#L74) dictionary could grow indefinitely if requests fail before their responses are processed.
Implement a cleanup mechanism or a timeout for orphaned entries.
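A minimal sketch of what such a cleanup could look like (hypothetical names, not LitServe's actual internals): entries are timestamped on insertion and stale ones are purged periodically.
```python
import threading
import time

RESPONSE_TTL_SECONDS = 60.0  # assumed timeout for orphaned entries

response_buffer: dict = {}   # uid -> (event, created_at)
buffer_lock = threading.Lock()

def put_response_entry(uid, event):
    with buffer_lock:
        response_buffer[uid] = (event, time.monotonic())

def purge_orphaned_entries():
    """Drop entries whose requests failed before their responses were processed."""
    cutoff = time.monotonic() - RESPONSE_TTL_SECONDS
    with buffer_lock:
        stale = [uid for uid, (_, created_at) in response_buffer.items() if created_at < cutoff]
        for uid in stale:
            del response_buffer[uid]
```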
#### Code sample
<!-- Ideally attach a minimal code sample to reproduce the described issue.
Minimal means having the shortest code but still preserving the bug. -->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### Environment
If you published a Studio with your bug report, we can automatically get this information. Otherwise, please describe:
- PyTorch/Jax/Tensorflow Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
### Additional context
<!-- Add any other context about the problem here. -->
| 0easy
|
Title: X has feature names, but StandardScaler was fitted without feature names
Body: This issue has been coming up when I call:
`automl.predict_proba(input)`
I am using the requirements.txt in a venv. Shouldn't the input have feature names?
This message didn't use to come up, and I don't know why. | 0easy
|
Title: Placeholder for title becomes editable when the user double clicks.
Body: ## Bug Report
**Problematic behavior**
Here is a [video](https://www.loom.com/share/22c41836a9254727b0d024d761b2011f) of the problem. | 0easy
|
Title: Clean up shims for old python version
Body: For example there is code using [six](https://six.readthedocs.io/) to handle 2/3 differences, but there is code that requires 3.5+ (e.g. type annotations syntax) and docs say 3.6+. As Python2 went EOL earlier this year, it's probably good to clean up the old code and dependencies. | 0easy
|
Title: Return to the first page after visiting invalid page
Body: ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
When visiting a non-existent page for some resource in the UI, you're shown an error notification

It would be nice to redirect the user automatically to the first page or to provide a button to do so. There are other buttons on the page (Tasks, Jobs etc.), but if there was some filter enabled, visiting these pages will clear the filter.
### Describe the solution you'd like
_No response_
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: CI improvements and speedups
Body: There are a few things that might be improved / changed and everyone is welcome to implement these ideas:
1. Get rid of upload/download artifact and use a registry server. This would mean far fewer uploads (most layers will already be uploaded). It might be a good idea to use [service containers](https://docs.github.com/en/actions/using-containerized-services/about-service-containers) here.
2. Execute several steps inside one runner. For example, `docker-stacks-foundation`, `base-notebook` and `minimal-notebook` can be built together, and while one image is uploading as artefact, another can already start building. This doesn't look very clean though, as it will introduce complexity.
3. Use [Autoscaling with self-hosted runners](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/autoscaling-with-self-hosted-runners). This will allow creating aarch64 runners automatically and only when needed. It will save resources and my money 😆
To be honest, I'm pretty happy with the way CI works currently, so I don't think these are essential to implement, but I might give these ideas a try. | 0easy
|
Title: [BUG] Failed to execute query when there are multiple arguments
Body: **Describe the bug**
When executing a query with multiple conditions joined by `and`, the query fails with a `SyntaxError`.
**To Reproduce**
```python
import numpy as np
import mars.dataframe as md
df = md.DataFrame({'a': np.random.rand(100),
'b': np.random.rand(100),
'c c': np.random.rand(100)})
df.query('a < 0.5 and a != 0.1 and b != 0.2').execute()
```
```
Traceback (most recent call last):
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 131, in visit
visitor = getattr(self, method)
AttributeError: 'CollectionVisitor' object has no attribute 'visit_Series'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/wenjun.swj/miniconda3/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3417, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-f0b7eeac5829>", line 1, in <module>
df.query('a < 0.5 and a != 0.1 and b != 0.2').execute()
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 773, in df_query
predicate = mars_eval(expr, resolvers=(df,), level=level + 1, **kwargs)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 507, in mars_eval
result = visitor.eval(expr)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 112, in eval
return self.visit(node)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 134, in visit
return visitor(node)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 141, in visit_Module
result = self.visit(expr)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 134, in visit
return visitor(node)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 145, in visit_Expr
return self.visit(node.value)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 134, in visit
return visitor(node)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 178, in visit_BoolOp
return reduce(func, node.values)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 177, in func
return self.visit(binop)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 134, in visit
return visitor(node)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 148, in visit_BinOp
left = self.visit(node.left)
File "/Users/wenjun.swj/Code/mars/mars/dataframe/base/eval.py", line 133, in visit
raise SyntaxError('Query string contains unsupported syntax: {}'.format(node_name))
SyntaxError: Query string contains unsupported syntax: Series
``` | 0easy
|
Title: Add XML Parsing block
Body: Use [https://github.com/Significant-Gravitas/gravitasml](https://github.com/Significant-Gravitas/gravitasml) | 0easy
|
Title: Support constraints.cat and CatTransform
Body: Hello!
I have a custom multi-dimensional distribution where the support may be truncated along some dimensions. In terms of constraints, some dimensions will either be `real`, `greater_than`, `less_than`, or `interval`. I naively was then implementing the `support` as, e.g.:
```python
ivl = constraints.interval([0., -jnp.inf, 5.], [jnp.inf, 0., 10.])
```
Right now, this is not really supported by the `numpyro.distributions.constraints.Interval` class because of how [`feasible_like()`](https://github.com/pyro-ppl/numpyro/blob/master/numpyro/distributions/constraints.py#L514C5-L517C10) works, or how the `scale` is computed in the [unconstrained transform](https://github.com/pyro-ppl/numpyro/blob/master/numpyro/distributions/transforms.py#L1604). Would you be open to making these things inf-safe? So far I instead implemented a custom subclass `InfSafeInterval(constraints._Interval)` to support this, but thought I would check in on this. Thanks! | 0easy
|
Title: Scrapy does not decode base64 MD5 checksum from GCS
Body: <!--
Thanks for taking an interest in Scrapy!
If you have a question that starts with "How to...", please see the Scrapy Community page: https://scrapy.org/community/.
The GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself.
Keep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md
The following is a suggested template to structure your issue, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#reporting-bugs
-->
### Description
Incorrect GCS Checksum processing
### Steps to Reproduce
1. Obtain the checksum for an up-to-date file.
**Expected behavior:** [What you expect to happen]
matches the checksum of the file downloaded
**Actual behavior:** [What actually happens]
NOT matches the checksum of the file downloaded
**Reproduces how often:** [What percentage of the time does it reproduce?]
Always
### Versions
current
### Additional context
https://cloud.google.com/storage/docs/json_api/v1/objects
> MD5 hash of the data, encoded using [base64](https://datatracker.ietf.org/doc/html/rfc4648#section-4).
However, Scrapy does not decode the base64-encoded MD5 returned by GCS.
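A minimal sketch of the decoding step that appears to be missing (hypothetical value; Scrapy's checksums are hex digests, while the GCS JSON API returns base64):
```python
import base64

# Example md5Hash value as returned by the GCS objects resource (base64-encoded)
gcs_md5_b64 = "XrY7u+Ae7tCTyyK7j1rNww=="

# Decode to raw bytes, then compare as a hex digest like Scrapy's own checksums
md5_hex = base64.b64decode(gcs_md5_b64).hex()
print(md5_hex)  # 5eb63bbbe01eeed093cb22bb8f5acdc3
```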
| 0easy
|
Title: More than 1 `input_shape` when initializing `flax_module`
Body: Some modules require more than one input when initializing; the extra inputs can be passed through `kwargs`, but this doesn't work in some cases. For example:
```python
class RNN(nn.Module):
@functools.partial(
nn.transforms.scan,
variable_broadcast='params',
split_rngs={'params': False})
@nn.compact
def __call__(self, state, x):
return RNNCell()(state, x)
```
I tried to declare this with the following statement:
```python
rnn = flax_module(
'rnn',
RNN(),
input_shape=(num_hiddens,),
x=jnp.ones((10, 10))
)
```
But I can't use kwargs because `nn.transforms.scan` does not support them:
```
RuntimeWarning: kwargs are not supported in scan, so "x" is(are) ignored
```
I worked around this by wrapping my `RNN` with another class, after which I could pass `x` as a kwarg. However, I think `input_shape` should allow passing dimensions for more than one input.
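For reference, a rough sketch of that wrapper workaround (hypothetical code, assuming the imports and definitions from the snippets above):
```python
class RNNWrapper(nn.Module):
    @nn.compact
    def __call__(self, state, x):
        # Plain __call__ without scan's kwargs restriction; `x` can be passed to
        # flax_module as a keyword and is forwarded positionally to the scanned RNN.
        return RNN()(state, x)

rnn = flax_module(
    'rnn',
    RNNWrapper(),
    input_shape=(num_hiddens,),
    x=jnp.ones((10, 10)),
)
```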
https://github.com/pyro-ppl/numpyro/blob/0bff074a4a54a593a7fab7e68b5c10f85dd332a6/numpyro/contrib/module.py#L83 | 0easy
|
Title: Improve error message when failing to initialize Metaproduct
Body: Tasks may generate more than one product like this:
```python
from ploomber.products import File
from ploomber.tasks import PythonCallable
from ploomber import DAG
def _do_stuff():
pass
# note we are calling File
PythonCallable(_do_stuff, {'a': File('something')}, dag=DAG())
```
But if the user forgets that:
```python
# forgot to call File!
PythonCallable(_do_stuff, {'a': 'something'}, dag=DAG())
```
We get this error:
```pytb
~/dev/ploomber/src/ploomber/tasks/tasks.py in __init__(self, source, product, dag, name, params, unserializer, serializer)
100 self._source = type(self)._init_source(source, kwargs)
101 self._unserializer = unserializer or dag.unserializer
--> 102 super().__init__(product, dag, name, params)
103
104 @staticmethod
~/dev/ploomber/src/ploomber/tasks/abc.py in __init__(self, product, dag, name, params)
192 type(self).__name__))
193
--> 194 self.product.task = self
195 self._client = None
196
~/dev/ploomber/src/ploomber/products/metaproduct.py in task(self, value)
113 def task(self, value):
114 for p in self.products:
--> 115 p.task = value
116
117 def exists(self):
AttributeError: 'str' object has no attribute 'task'
```
Better: check if `p` has a task attribute, if it doesn't, raise a better error like "doesn't look like a Product instance, got object of type {type}"
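A rough sketch of that check (hypothetical helper; the actual fix would live in `metaproduct.py` near the `task` setter shown in the traceback):
```python
def _validate_products(products):
    """Raise a clear error if any element doesn't look like a Product."""
    for p in products:
        if not hasattr(p, 'task'):
            raise TypeError(
                f"{p!r} doesn't look like a Product instance (e.g. File), "
                f"got object of type {type(p).__name__}"
            )
```
The `task` setter could then call `_validate_products(self.products)` before assigning `p.task = value`.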
| 0easy
|
Title: Ignore empty log messages (including messages cleared by listeners)
Body: Listeners using the v3 API can modify the logged message, but they cannot remove it altogether. Even if they set the message to an empty string, that empty string is still logged. This adds unnecessary noise to the log file and increases the size of output.xml.
This is easiest to implement by ignoring log messages having an empty string as the value. Not writing them to output.xml is easy and that automatically takes care of removing them from the log file as well. They should not be included in the result model created during execution either, but I actually believe that model doesn't get log messages at all at the moment.
---
**UPDATE:** This change will obviously mean that all empty messages are ignored. Updated the title accordingly. | 0easy
|
Title: Improve RectangularROI error message
Body: When initializing a `RectangularROI` using bad values for the `top`, `bottom`, `right` or `left` parameters, the error message is not very helpful. For example, when `top` and `bottom` have the same value:
```python
import hyperspy.api as hs
roi = hs.roi.RectangularROI(left=1.0, right=1.5, top=2.0, bottom=2.0)
```
The error message is very long (not shown here), and not very intuitive:
```python
hyperspy/hyperspy/roi.py in _bottom_changed(self, old, new)
923 def _bottom_changed(self, old, new):
924 if self._bounds_check and \
--> 925 self.top is not t.Undefined and new <= self.top:
926 self.bottom = old
927 else:
TypeError: '<=' not supported between instances of '_Undefined' and 'float'
```
There should be a check early in the initialization, to see if the `top`, `bottom`, `right` and `left` values are good. If they aren't, a more intuitive error message should be shown. | 0easy
|
Title: loosen the docs requirements
Body: In [docs/requirements.txt](https://github.com/cleanlab/cleanlab/blob/master/docs/requirements.txt), we have many packages pinned to only a specific version.
Goal: figure out as many of these packages where we can replace the `==` with `>=` to ensure we're running the docs build with the latest versions in our CI.
Why? Because the docs build auto-runs all of our [tutorial notebooks](https://github.com/cleanlab/cleanlab/tree/master/docs/source/tutorials) and we'd like to automatically make sure the tutorials run with the latest versions of the dependency packages (and be aware when a dependency has released a new version that breaks things).
If working on this, you can feel free to only focus on a specific dependency and test the docs build if you replace `==` with `>=` for that one dependency. | 0easy
|
Title: Upgrade to `uv` instead of `pdm`
Body: See references: https://github.com/astral-sh/uv
Also: https://github.com/pydantic/logfire/pull/480 | 0easy
|
Title: Libyears metric API
Body: The canonical definition is here: https://chaoss.community/?p=3976 | 0easy
|
Title: Fix SAFE_MODE
Body: SAFE_MODE isn't working from config.ini apparently at the moment. | 0easy
|
Title: Inconsistent item assignment exception for `st.secrets`
Body: ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
`st.secrets` is read-only. When assigning items via key/dict notation (`st.secrets["foo"] = "bar"`), it properly shows an exception:

But when assigning an item via dot notation (`st.secrets.foo = "bar"`), it simply fails silently, i.e. it doesn't show an exception but it also doesn't set the item. I think in this situation it should also show an exception.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-10107)
```Python
import streamlit as st
st.secrets.foo = "bar"
```
### Steps To Reproduce
_No response_
### Expected Behavior
Show same exception message as for `st.secrets["foo"] = "bar"`.
### Current Behavior
Nothing.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41.0
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | 0easy
|
Title: [RFC] Move to f"string"
Body: Python 3.5 has been dropped.
Now some uses of `format` can be replaced with f-strings. | 0easy
|
Title: Detect missing pydantic dependency (e.g. pydantic-settings or extra types)
Body: Hey team,
I wasn't sure if you would be comfortable with this, but what's your opinion on telling the user when they are missing a pydantic dependency, such as `pydantic-settings` when the code uses `BaseSettings` from Pydantic V1?
This could spit out a warning (whether that's stdout, stderr, log.txt, or somewhere else) or a `# TODO` comment if the `pydantic-settings` package isn't installed.
This will help users more quickly identify when their application requires packages that are not currently installed.
To detect if a package is installed without importing it (for safety reasons), `importlib.util` can be used:
```
from importlib.util import find_spec
print(find_spec("pydantic-settings"))
# ''
print(find_spec("pydantic"))
# 'ModuleSpec(name='pydantic', loader=<_frozen_importlib_external.SourceFileLoader object at 0x104e18240>, origin='/Users/kkirsche/.asdf/installs/python/3.11.4/lib/python3.11/site-packages/pydantic/__init__.py', submodule_search_locations=['/Users/kkirsche/.asdf/installs/python/3.11.4/lib/python3.11/site-packages/pydantic'])'
```
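Building on that, a rough sketch of how the warning could be emitted (hypothetical helper; note that the importable module name uses an underscore, `pydantic_settings`):
```python
import warnings
from importlib.util import find_spec

def warn_if_missing_pydantic_settings() -> None:
    # the distribution is "pydantic-settings", but the importable module is "pydantic_settings"
    if find_spec("pydantic_settings") is None:
        warnings.warn(
            "This code uses `BaseSettings`, which lives in the separate "
            "`pydantic-settings` package in Pydantic V2. "
            "Install it with `pip install pydantic-settings`.",
            stacklevel=2,
        )
```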
Documentation on `find_spec`:
https://docs.python.org/3/library/importlib.html#importlib.util.find_spec | 0easy
|
Title: Ability to resume after an error occurs
Body: I didn't see a way to do this, apologies if it is described somewhere. I am attempting to build an application and I noticed any time it exits due to an error the process stops and then I'm stuck. So for example:
```
...
npm install
To run the codebase:
npm run dev
Do you want to execute this code?
npm install
npm run dev
If yes, press enter. Otherwise, type "no"
Executing the code...
npm WARN deprecated @types/googlemaps@3.43.3: Types for the Google Maps browser API have moved to @types/google.maps. Note: these types are not for the googlemaps npm package, which is a Node API.
npm WARN deprecated @material-ui/styles@4.11.5: Material UI v4 doesn't receive active development since September 2021. See the guide https://mui.com/material-ui/migration/migration-v4/ to upgrade to v5.
npm WARN deprecated @material-ui/core@4.12.4: Material UI v4 doesn't receive active development since September 2021. See the guide https://mui.com/material-ui/migration/migration-v4/ to upgrade to v5.
added 39 packages, removed 347 packages, and audited 354 packages in 5s
44 packages are looking for funding
run `npm fund` for details
5 critical severity vulnerabilities
To address all issues (including breaking changes), run:
npm audit fix --force
Run `npm audit` for details.
> scavenger@1.0.0 dev
> next dev
ready - started server on 0.0.0.0:3000, url: http://localhost:3000
info - Using webpack 5. Reason: Enabled by default https://nextjs.org/docs/messages/webpack5
It looks like you're trying to use TypeScript but do not have the required package(s) installed.
Please install @types/react by running:
yarn add --dev @types/react
If you are not trying to use TypeScript, please remove the tsconfig.json file from your package root (and any TypeScript files in your pages directory).
(venv) ➜ gpt-engineer git:(main) ✗
```
Notice the TypeScript error above; the process ends and I'm back at the command prompt. Is it possible to resume after adding the missing package with `yarn add --dev @types/react`? If not, it'd be a great feature. | 0easy
|
Title: Replace Twitter links with Mastodon links
Body: <!--
Summarise the documentation change you’re suggesting in the Issue title.
-->
### Details
<!--
Provide a clear and concise description of what you want to happen.
-->
Wagtail has decided to move away from Twitter/X to Mastodon (specifically Fosstodon):
- https://x.com/WagtailCMS/status/1834237886285103507
- https://fosstodon.org/@wagtail
<!--
If you're suggesting a very specific change to the documentation, feel free to directly submit a pull request.
-->
### Working on this
<!--
Do you have thoughts on skills needed?
Are you keen to work on this yourself once the issue has been accepted?
Please let us know here.
-->
For the files we need to change, refer to past PRs that changed `twitter.com` to `x.com`, e.g.
- #12205
- #12234
There may be some others I have missed.
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| 0easy
|
Title: Missing type information when dependencies are specified using AsyncIterator
Body: ## Describe the bug
The type information for classes such as AuthenticationBackend causes mypy errors when using a dependency whose return type is AsyncIterator rather than AsyncGenerator.
## To Reproduce
Consider a dependency:
```python
async def get_redis_strategy(
config: t.Annotated[Config, config_dependency],
redis: t.Annotated[Redis, redis_dependency], # type: ignore[type-arg]
) -> t.AsyncGenerator[RedisStrategy[User, UUID], None]:
yield RedisStrategy(redis, lifetime_seconds=config.SESSION_EXPIRY_SECONDS)
```
Which could also be written more succinctly as:
```python
async def get_redis_strategy(
config: t.Annotated[Config, config_dependency],
redis: t.Annotated[Redis, redis_dependency], # type: ignore[type-arg]
) -> t.AsyncIterator[RedisStrategy[User, UUID]]:
yield RedisStrategy(redis, lifetime_seconds=config.SESSION_EXPIRY_SECONDS)
```
In the latter case mypy reports a type issue, as DependencyCallable defined in fastapi_users/types.py does not allow for AsyncIterator (even though it's semantically the same in this case).
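A hedged sketch of how such an alias could be widened to also accept iterator-returning dependencies (illustrative only; the actual definition in `fastapi_users/types.py` may differ):
```python
from typing import (AsyncGenerator, AsyncIterator, Callable, Coroutine,
                    Generator, Iterator, TypeVar, Union)

RETURN_TYPE = TypeVar("RETURN_TYPE")

DependencyCallable = Callable[
    ...,
    Union[
        RETURN_TYPE,
        Coroutine[None, None, RETURN_TYPE],
        AsyncGenerator[RETURN_TYPE, None],
        Generator[RETURN_TYPE, None, None],
        AsyncIterator[RETURN_TYPE],
        Iterator[RETURN_TYPE],
    ],
]
```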
## Expected behavior
Using a dependency that returns an AsyncIterator as the `get_strategy` argument to `AuthenticationBackend` should not cause a mypy error.
## Configuration
- Python version : 3.11.3
- FastAPI version : 0.95.2
- FastAPI Users version : 11.0.0 | 0easy
|
Title: Support `base_url` when initializing sessions
Body:
**Describe alternatives you've considered**
https://www.python-httpx.org/advanced/#other-client-only-configuration-options
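For reference, this is how the httpx client handles it (working httpx example; the requested feature would mirror this behaviour):
```python
import httpx

client = httpx.Client(base_url="https://api.example.com")
response = client.get("/users/1")  # resolves to https://api.example.com/users/1
```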
| 0easy
|
Title: word2vec doc-comment example of KeyedVectors usage broken
Body: The usage example in the word2vec.py doc-comment regarding `KeyedVectors` uses inconsistent paths and thus doesn't work.
https://github.com/RaRe-Technologies/gensim/blob/e859c11f6f57bf3c883a718a9ab7067ac0c2d4cf/gensim/models/word2vec.py#L73
https://github.com/RaRe-Technologies/gensim/blob/e859c11f6f57bf3c883a718a9ab7067ac0c2d4cf/gensim/models/word2vec.py#L76
If vectors were saved to a tmpfile path based on the filename `'wordvectors.kv'`, they need to be loaded from that same path, not some other local-directory file named 'model.wv'.
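A minimal corrected sketch (hypothetical, using a plain local path for both save and load; `model` is assumed to be a trained `Word2Vec` instance from the earlier example):
```python
from gensim.models import KeyedVectors

# save the vectors of an already-trained Word2Vec model
model.wv.save("wordvectors.kv")

# load from the same path that was used for saving
wv = KeyedVectors.load("wordvectors.kv")
```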
(Also, in my opinion the use of `get_tmpfile()` adds unnecessary extra complexity to this example. People usually **don't** want their models in a "temp" directory, which some systems will occasionally delete, so the examples might as well do the simplest possible thing: store in the current working directory with simple string filenames. The example code above this is also confused, because it creates a temp-file path, but then doesn't actually use it, choosing to do the simple & right thing with a local file instead.) | 0easy
|
Title: Interactive ROI plot does not update limits
Body: Running the following code:
```python
import numpy as np
import hyperspy.api as hs
s = hs.signals.Signal2D(np.random.random((500, 500)))
line_roi = hs.roi.Line2DROI()
s.plot()
s_line = line_roi.interactive(s, color='red')
s_line.plot()
```
Gives the plot:

Then, changing the length of the red ROI line does update the data in the `s_line` plot, but it does not update the x-limits.
Example of this:


| 0easy
|
Title: Update the hugging face space header
Body: - [x] I have searched to see if a similar issue already exists.
We need to update the Huggingface space header version to latest to pull in some bug fixes. | 0easy
|
Title: Add check to forbid auto migrations
Body: More context: https://adamj.eu/tech/2020/02/24/how-to-disallow-auto-named-django-migrations/
Discussion: https://twitter.com/AdamChainz/status/1231895529686208512 | 0easy
|
Title: Add an option to remove line numbers in gr.Code
Body: - [X ] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
`gr.Code()` always displays line numbers.
**Describe the solution you'd like**
I propose to add an option `show_line_numbers = True | False` to display or hide the line numbers. The default should be `True` for compatibility with the current behaviour.
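Usage could then look like this (`show_line_numbers` is the proposed, hypothetical parameter, shown only to illustrate the request):
```python
import gradio as gr

with gr.Blocks() as demo:
    gr.Code(value="print('hello')", language="python", show_line_numbers=False)

demo.launch()
```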
| 0easy
|
Title: Proposal: sort draw.ellipse coordinates by default to make sure they are all contiguous
Body: ### Description:
Someone just upvoted this old SO answer of mine:
https://stackoverflow.com/questions/62339802/skimage-draw-ellipse-generates-two-undesired-lines
Basically, using plt.plot or any other line drawing software to draw an ellipse from our ellipse coordinates fails because of how the coordinates are sorted (or rather, not sorted):

The solution is to sort the coordinates based on the angle around the centroid. It's easy/fast enough and would make downstream work simpler, plus I think it's what most users would expect. | 0easy
|
Title: Bug: `exceptiongroup` backport is missing on Python 3.10
Body: ### Description
Code expects `exceptiongroup` backport to be installed in https://github.com/litestar-org/litestar/blob/6e4e530445eadbc1fd2f65bebca3bc68cf12f29a/litestar/utils/helpers.py#L101
However, it's only declared as a _dev_ dependency in https://github.com/litestar-org/litestar/blob/6e4e530445eadbc1fd2f65bebca3bc68cf12f29a/pyproject.toml#L109 so after installing `litestar` it won't be found, and one currently has to require it manually.
Running Python 3.10.
### Logs
Full stacktrace (failure on launch):
```
Traceback (most recent call last):
File "/home/api/.local/lib/python3.10/site-packages/litestar/utils/helpers.py", line 99, in get_exception_group
return cast("type[BaseException]", ExceptionGroup) # type:ignore[name-defined]
NameError: name 'ExceptionGroup' is not defined. Did you mean: '_ExceptionGroup'?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/myproject/api/myproject/cloud_management/new_app.py", line 4, in <module>
from litestar import Litestar
File "/home/api/.local/lib/python3.10/site-packages/litestar/__init__.py", line 1, in <module>
from litestar.app import Litestar
File "/home/api/.local/lib/python3.10/site-packages/litestar/app.py", line 20, in <module>
from litestar._openapi.plugin import OpenAPIPlugin
File "/home/api/.local/lib/python3.10/site-packages/litestar/_openapi/plugin.py", line 10, in <module>
from litestar.routes import HTTPRoute
File "/home/api/.local/lib/python3.10/site-packages/litestar/routes/__init__.py", line 1, in <module>
from .asgi import ASGIRoute
File "/home/api/.local/lib/python3.10/site-packages/litestar/routes/asgi.py", line 7, in <module>
from litestar.routes.base import BaseRoute
File "/home/api/.local/lib/python3.10/site-packages/litestar/routes/base.py", line 13, in <module>
from litestar._kwargs import KwargsModel
File "/home/api/.local/lib/python3.10/site-packages/litestar/_kwargs/__init__.py", line 1, in <module>
from .kwargs_model import KwargsModel
File "/home/api/.local/lib/python3.10/site-packages/litestar/_kwargs/kwargs_model.py", line 49, in <module>
_ExceptionGroup = get_exception_group()
File "/home/api/.local/lib/python3.10/site-packages/litestar/utils/helpers.py", line 101, in get_exception_group
from exceptiongroup import ExceptionGroup as _ExceptionGroup # pyright: ignore
ModuleNotFoundError: No module named 'exceptiongroup'
```
### Litestar Version
2.5.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | 0easy
|
Title: Redis connector
Body: ### 🚀 The feature
Allow pandasai to connect to Redis and perform search over databases.
### Motivation, pitch
Currently there are SQL connectors for pandasai, but Redis is also a popular database.
Recently Redisearch module added vector storage and search capability, which I think is a valuable addon for pandasai for searching within databases containing multiple long documents.
### Alternatives
_No response_
### Additional context
Redisearch resources:
https://github.com/RediSearch/RediSearch | 0easy
|
Title: Update docs include syntax for source examples
Body: ### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
This is a good first contribution. :nerd_face:
The code examples shown in the docs are actual Python files. They are even tested in CI, that's why you can always copy paste an example and it will always work, the example is tested.
The way those examples are included in the docs used a specific format. But now there's a new format available that is much simpler and easier to use than the previous one, in particular in complex cases, for example when there are examples in multiple versions of Python.
But not all the docs have the new format yet. The docs should use the new format to include examples. That is the task. :nerd_face:
**It should be done as one PR per page updated.**
## Simple Example
Before, the format was like:
````markdown
```Python hl_lines="3"
{!../../docs_src/first_steps/tutorial001.py!}
```
````
Now the new format looks like:
````markdown
{* ../../docs_src/first_steps/tutorial001.py hl[3] *}
````
* Instead of `{!` and `!}` it uses `{*` and `*}`
* It no longer has a line above with:
````markdown
```Python
````
* And it no longer has a line below with:
````markdown
```
````
* The highlight is no longer a line with e.g. `hl_lines="3"` (to highlight line 3), but instead in the same line there's a `hl[3]`.
An example PR: https://github.com/fastapi/fastapi/pull/12552
## Multiple Python Versions
There are some cases where there are variants of the same example for multiple versions of Python, or for using `Annotated` or not.
In those cases, the current include examples have syntax for tabs, and notes saying `Annotated` should be preferred. For example:
````markdown
//// tab | Python 3.9+
```Python hl_lines="4 8 12"
{!> ../../docs_src/security/tutorial006_an_py39.py!}
```
////
//// tab | Python 3.8+
```Python hl_lines="2 7 11"
{!> ../../docs_src/security/tutorial006_an.py!}
```
////
//// tab | Python 3.8+ non-Annotated
/// tip
Prefer to use the `Annotated` version if possible.
///
```Python hl_lines="2 6 10"
{!> ../../docs_src/security/tutorial006.py!}
```
////
````
In these cases, it should be updated to only include the first one (the others will be included automatically :sunglasses: ):
````markdown
{* ../../docs_src/security/tutorial006_an_py39.py hl[4,8,12] *}
````
* The syntax for tabs is also removed, all the other variants are included automatically.
* The highlight lines are included for that same first file, the fragment with `hl_lines="4 8 12"` is replaced with `hl[4,8,12]`
An example PR: https://github.com/fastapi/fastapi/pull/12553
## Highlight Lines
### Simple Lines
When there's a fragment like:
````markdown
hl_lines="4 8 12"
````
That means it is highlighting the lines 4, 8, and 12.
The new syntax is on the same include line:
````markdown
hl[4,8,12]
````
* It separates individual lines by commas.
* It uses `hl`, with square brackets around.
### Line Ranges
When there are line ranges, like:
````markdown
hl_lines="4-6"
````
That means it is highlighting lines from 4 to 6 (so, 4, 5, and 6).
The new syntax uses `:` instead of `-` for the ranges:
````markdown
hl[4:6]
````
### Multiple Highlights
There are some highlights that include individual lines and also line ranges, for example the old syntax was:
````markdown
hl_lines="2 4-6 8-11 13"
````
That means it is highlighting:
* Line 2
* Lines from 4 to 6 (so, 4, 5, and 6)
* Lines from 8 to 11 (so, 8, 9, 10, and 11)
* Line 13
The new syntax separates by commas instead of spaces:
````markdown
hl[2,4:6,8:11,13]
````
## Include Specific Lines
In some cases, there are specific lines included instead of the entire file.
For example, the old syntax was:
````markdown
```Python hl_lines="7"
{!> ../../docs_src/separate_openapi_schemas/tutorial001_py310.py[ln:1-7]!}
# Code below omitted 👇
```
````
In this example, the lines included are from line 1 to line 7 (lines 1, 2, 3, 4, 5, 6, 7). In the old syntax, it's defined with the fragment:
````markdown
[ln:1-7]
````
In the new syntax, the included code from above would be:
````markdown
{* ../../docs_src/separate_openapi_schemas/tutorial001_py310.py ln[1:7] hl[7] *}
````
* The lines to include that were defined with the fragment `[ln:1-7]`, are now defined with `ln[1:7]`
The new syntax `ln` as in `ln[1:7]` also supports multiple lines and ranges to include.
### Comments Between Line Ranges
In the old syntax, when there are ranges of code included, there are comments like:
````markdown
# Code below omitted 👇
````
The new syntax generates those comments automatically based on the line ranges.
### Real Example
A more real example of the include with the old syntax looked like this:
````markdown
//// tab | Python 3.10+
```Python hl_lines="7"
{!> ../../docs_src/separate_openapi_schemas/tutorial001_py310.py[ln:1-7]!}
# Code below omitted 👇
```
<details>
<summary>👀 Full file preview</summary>
```Python
{!> ../../docs_src/separate_openapi_schemas/tutorial001_py310.py!}
```
</details>
////
//// tab | Python 3.9+
```Python hl_lines="9"
{!> ../../docs_src/separate_openapi_schemas/tutorial001_py39.py[ln:1-9]!}
# Code below omitted 👇
```
<details>
<summary>👀 Full file preview</summary>
```Python
{!> ../../docs_src/separate_openapi_schemas/tutorial001_py39.py!}
```
</details>
////
//// tab | Python 3.8+
```Python hl_lines="9"
{!> ../../docs_src/separate_openapi_schemas/tutorial001.py[ln:1-9]!}
# Code below omitted 👇
```
<details>
<summary>👀 Full file preview</summary>
```Python
{!> ../../docs_src/separate_openapi_schemas/tutorial001.py!}
```
</details>
////
````
In the new syntax, that is replaced with this:
````markdown
{* ../../docs_src/separate_openapi_schemas/tutorial001_py310.py ln[1:7] hl[7] *}
````
* The only file that needs to be included and defined is the first one, and the lines to include and highlight are also needed for the first file.
* All the other file includes, full file preview, comments, etc. are generated automatically.
---
An example PR: https://github.com/fastapi/fastapi/pull/12555
## Help
Do you want to help? Please do!
Remember **it should be done as one PR per page updated.**
If you see a page that doesn't fit these cases, leave it as is, I'll take care of it later.
Before submitting a PR, check if there's another one already handling that file.
Please name the PR including the file path, for example:
````markdown
📝 Update includes for `docs/tutorial/create-db-and-table.md`
```` | 0easy
|
Title: NotImplementedError when calling toGremlin in FoldedContextField
Body: Hey guys!
First of all, thanks a lot for this awesome work. I was testing the compiler in combination with Gremlin. The following GraphQL is mentioned in your Readme, but causes a NotImplementedError when trying to generate a Gremlin statement out of it:
```
Animal {
name @output(out_name: "name")
out_Animal_ParentOf @fold {
_x_count @filter(op_name: ">=", value: ["$min_children"])
@output(out_name: "number_of_children")
name @filter(op_name: "has_substring", value: ["$substr"])
@output(out_name: "child_names")
}
}
```
Is it a bug, or is it just not implemented?
Many thanks! | 0easy
|
Title: tb contains method
Body: > Also, it would be nice if the following check is supported:
> 'happy_fraction' in tb
Implement a `__contains__` method which returns whether the variable/function is present in the notebook or not (in memory).
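A rough sketch of how this could look (hypothetical; assumes the client's existing `value()` helper for evaluating an expression in the kernel):
```python
class TestbookNotebookClient:  # existing client class; only the new method is sketched
    def __contains__(self, name: str) -> bool:
        # check membership against the notebook's global namespace in the kernel
        return bool(self.value(f"{name!r} in globals()"))
```
With that in place, `'happy_fraction' in tb` would evaluate inside the running notebook.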
_Originally posted by @rohitsanj in https://github.com/nteract/testbook/issues/96#issuecomment-819540146_ | 0easy
|
Title: Notebook/script templates generated by "ploomber scaffold" should contain a cell with the autoreload magic
Body: If `ploomber scaffold` finds a `pipeline.yaml`, it checks all `tasks[*].sources` and creates files for all tasks whose source is missing, e.g.:
```yaml
tasks:
- source: scripts/some-script.py
product: output.ipynb
```
If `scripts/some-script.py` does not exist, it creates one.
The created script contains a basic structure for users to get started. It'd be useful to add the following as the first cell:
```python
# uncomment the next two lines to enable auto-reloading
# for more info: https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html
# %load_ext autoreload
# %autoreload 2
```
This is a pretty handy but relatively unknown feature of IPython that auto-reloads modified code.
Documentation: https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html
File to modify: https://github.com/ploomber/ploomber/blob/ac915ea998f4da0177204f2189f8f608f3404fc6/src/ploomber/resources/ploomber_add/task.py
This is the line that triggers file creation when executing "ploomber scaffold": https://github.com/ploomber/ploomber/blob/ac915ea998f4da0177204f2189f8f608f3404fc6/src/ploomber/cli/cli.py#L67
Add tests here: https://github.com/ploomber/ploomber/blob/master/tests/cli/test_scaffold.py
| 0easy
|
Title: Topic 9 Kaggle template broken
Body: https://www.kaggle.com/kashnitsky/topic-9-part-2-time-series-with-facebook-prophet

| 0easy
|
Title: Numeric factor in environment name wrongly causes "conflicting factors" error
Body: https://github.com/tox-dev/tox/pull/2597 had the unintended effect of interpreting a numeric factor (e.g. `2105`) as a candidate base python, breaking previously valid environment names (e.g. `py37-foo-2105`) by raising `ValueError: conflicting factors py37, 2105 in py37-foo-2105`.
Example GitHub workflow build: https://github.com/galaxyproject/planemo/actions/runs/3654111042/jobs/6174245903 for `tox.ini` https://github.com/galaxyproject/planemo/blob/nsoranzo-patch-1/tox.ini
_Originally posted by @nsoranzo in https://github.com/tox-dev/tox/pull/2597#discussion_r1044065606_ | 0easy
|
Title: Fix non-clicable checkbox label in user creation form in Admin side
Body: The user creation form checkbox label for email user credentials isn't clickable. Small fix. | 0easy
|
Title: [FEATURE REQUEST] Add temporal_hidden_past and temporal_hidden_future hyperparams to TiDEModel
Body: The implementation of TiDE currently uses `hidden_size` for both the hidden layer of the covariates encoder (applied timestep-wise) and for the dense encoder (applied after flattening). This does not seem right as in most cases this would result in either absurd expansion in the covariate encoder or extreme compression in the dense encoder.
In the original Google Research paper the authors do not seem to comment on the hidden size of the covariate encoder, but reviewing their Github repo it appears that it was [fixed to 64.](https://github.com/google-research/google-research/blob/master/tide/train.py)
```
# Create model
model_config = {
'model_type': 'dnn',
'hidden_dims': [FLAGS.hidden_size] * FLAGS.num_layers,
'time_encoder_dims': [64, 4], # hidden, output
'decoder_output_dim': FLAGS.decoder_output_dim,
'final_decoder_hidden': FLAGS.final_decoder_hidden,
'batch_size': dtl.batch_size,
}
```
I think this should be exposed as a hyperparameter, as in most cases it should be much smaller than the dense encoder `hidden_size`, as it seems to have been in the original experiments. | 0easy
|
Title: Missing section
Body: Missing section 2.3 and the task about the KDE plot of the height feature. | 0easy
|
Title: update the docs about http error handling
Body: In 0.8 the HTTP error handling was removed from Splinter.
The docs about it should be updated or removed: https://github.com/cobrateam/splinter/blob/master/docs/http-status-code-and-exception.rst | 0easy
|