text | labels
---|---|
Title: Layout as list of components does not work when layout is a function
Body: **Describe your context**
```
dash 2.17.1
dash_ag_grid 31.2.0
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-mantine-components 0.12.1
dash-table 5.0.0
dash-testing-stub 0.0.2
```
**Describe the bug**
https://github.com/plotly/dash/pull/2795 enabled you to pass a list of components to `app.layout`. However, this does not work when `app.layout` is set to a function:
```python
import datetime
from dash import Dash, html

def layout():
    return [html.H1(datetime.datetime.now())]

app = Dash(__name__)
app.layout = layout
app.run()
```
Gives:
```
Traceback (most recent call last):
File "/Users/antony_milne/Library/Application Support/JetBrains/PyCharm2024.1/scratches/scratch_51.py", line 11, in <module>
app.layout = layout
^^^^^^^^^^
File "/Users/antony_milne/Library/Application Support/hatch/env/virtual/vizro/fm3ubPZu/vizro/lib/python3.11/site-packages/dash/dash.py", line 734, in layout
[simple_clone(c) for c in layout_value._traverse_ids()],
^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'list' object has no attribute '_traverse_ids'
```
I see that the main motivation for #2795 was that it simplified examples for beginners, so this is not such an important case, since using a function for `layout` is already more advanced usage. But I still think this should work.
The root of the problem is that the code in `Dash.layout` that handles the case where the layout is a function still expects it to return a single Dash component. One possible fix would be to alter this line:
https://github.com/plotly/dash/blob/bbd013cd9d97d8c41700b240dd95c32e6b875998/dash/dash.py#L729
This line expects a single Dash component to be returned rather than a list, so the example works if you just wrap the returned list in an `html.Div`.
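For reference, a minimal sketch of that user-side workaround (not a fix in Dash itself):
```python
import datetime
from dash import Dash, html

def layout():
    # Wrapping the list in a single container component avoids the
    # AttributeError until list returns are supported for layout functions.
    return html.Div([html.H1(datetime.datetime.now())])

app = Dash(__name__)
app.layout = layout
app.run()
```
| 0easy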
|
Title: Windows GUI for matchering (Claude created)
Body: PySimpleGUI-based GUI for Matchering
I've created a simple GUI for Matchering using PySimpleGUI. This solution provides an easy-to-use interface for those who prefer a graphical approach over command-line usage.
Features:
- Select target and reference tracks via file browser
- Choose output formats (16-bit WAV, 24-bit WAV, or both)
- Simple 'Process' button to start Matchering
- Error handling and completion notifications
Requirements:
- Python 3.x
- PySimpleGUI
- Matchering
A. Install matchering as per its instructions:
`python -m pip install -U matchering`
B. Install the GUI dependency:
`pip install PySimpleGUI matchering`
C. Save the code below in a file called `matchering_gui.py` in your matchering folder.
```python
import PySimpleGUI as sg
import matchering as mg

def create_window():
    layout = [
        [sg.Text('Target Track:'), sg.Input(key='-TARGET-'), sg.FileBrowse()],
        [sg.Text('Reference Track:'), sg.Input(key='-REFERENCE-'), sg.FileBrowse()],
        [sg.Text('Output Formats:')],
        [sg.Checkbox('16-bit WAV', default=True, key='-16BIT-'),
         sg.Checkbox('24-bit WAV', default=False, key='-24BIT-')],
        [sg.Button('Process'), sg.Button('Exit')]
    ]
    return sg.Window('Matchering GUI', layout)

def main():
    window = create_window()

    while True:
        event, values = window.read()
        if event == sg.WINDOW_CLOSED or event == 'Exit':
            break
        if event == 'Process':
            target = values['-TARGET-']
            reference = values['-REFERENCE-']
            results = []
            if values['-16BIT-']:
                results.append(mg.pcm16(f"{target.rsplit('.', 1)[0]}_master_16bit.wav"))
            if values['-24BIT-']:
                results.append(mg.pcm24(f"{target.rsplit('.', 1)[0]}_master_24bit.wav"))

            if not target or not reference or not results:
                sg.popup('Please fill in all fields and select at least one output format.')
                continue

            try:
                mg.log(print)
                mg.process(
                    target=target,
                    reference=reference,
                    results=results,
                )
                sg.popup('Processing complete!')
            except Exception as e:
                sg.popup_error(f'An error occurred: {str(e)}')

    window.close()

if __name__ == '__main__':
    main()
```
Usage:
1. Run the script: `python matchering_gui.py` (or double-click the file)
2. Use the GUI to select files and process audio
3. Enjoy!
This GUI provides a user-friendly alternative to command-line usage and might be helpful for users who prefer graphical interfaces. It's ugly as sin, but it does the job.
I take no responsibility at all, it's all Claude Sonnet 3.5's fault. :)
| 0easy
|
Title: Zombied action leads to wrong (completed) workflow run state
Body: **Describe the bug**
Hello, I created a workflow with four actions to be executed sequentially. When the third action takes a long time to complete, the fourth action is not executed, and the logs simply output "Workflow completed".
The reason seems to be that the current action is removed from `running_jobs_store` before the next action is added to `ready_jobs_queue`.
I'd also like to ask when the scheduled execution feature will be supported; I saw it in the documentation but couldn't find it on the configuration page.
| 0easy
|
Title: Utility function to find the intersection of time series
Body: Hello!
I think it would be incredibly valuable to have a function that finds the intersection of a list of time series. This would be something akin to xarray's "align" function, which finds the intersection across a list of variables.
I commonly find I have to manually convert my TimeSeries back to an xarray DataArray, run an align, then convert back to a TimeSeries. For instance:
```python
import xarray as xr
from darts.timeseries import TimeSeries
ts1 = TimeSeries.from_csv('file1.csv', time_col='time')
ts2 = TimeSeries.from_csv('file2.csv', time_col='time')
ts3 = TimeSeries.from_csv('file3.csv', time_col='time')
ts1_, ts2_, ts3_ = xr.align( ts1.data_array(), ts2.data_array(), ts3.data_array(), exclude = 'component')
ts1 = TimeSeries(ts1_)
ts2 = TimeSeries(ts2_)
ts3 = TimeSeries(ts3_)
```
So something like:
```python
ts1 = TimeSeries.from_csv('file1.csv', time_col='time')
ts2 = TimeSeries.from_csv('file2.csv', time_col='time')
ts3 = TimeSeries.from_csv('file3.csv', time_col='time')
ts1, ts2, ts3 = TimeSeries.intersection(ts1, ts2, ts3)
```
If the intersection doesn't exist, it could raise a warning like: `Warning: there are no overlapping times between the time series`.
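In the meantime, a rough user-side helper seems possible with darts' existing `slice_intersect` (a sketch; I'm assuming it intersects the time indexes pairwise):
```python
from functools import reduce
from darts import TimeSeries

def intersect_all(*series: TimeSeries) -> list:
    # Reduce all series to their common time span, then slice each one to it.
    common = reduce(lambda a, b: a.slice_intersect(b), series)
    return [s.slice_intersect(common) for s in series]
```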
Another option would also be an option for "stack" to take an argument to do this intersection:
```
full_ts = ts1.stack(ts2, intersection = True).stack(ts3, intersection = True)
```
And finally, if this could be added in the pipeline API, that would be excellent!
Apologies if these features already exist! I am new to the package, but I couldn't find an example or something like this in the API. I am loving it so far though! | 0easy
|
Title: Modify ResidualsPlot histogram to optionally use a PDF
Body: Because the training data set is usually much larger than the test data set, adding an option to normalize the histogram by using a PDF instead of pure frequency may make comparing the train and test set residuals distribution easier.
Right now the histogram is drawn on the plot if the argument `hist=True`; we can change this argument to accept a string or a boolean, e.g. `hist=True` or `hist='density'` draws the PDF, `hist='frequency'` draws the normal histogram, and `hist=None` or `hist=False` does not draw the histogram at all. Alternatively, we can simply add another boolean argument, `density=True` to the visualizer if that is more understandable.
The histogram is drawn on [line 508](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/regressor/residuals.py#L507); to use the PDF, pass `density=True` to the `hist()` function.
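For illustration, the argument handling could look something like this (a sketch, not the current visualizer code):
```python
import matplotlib.pyplot as plt

def draw_residual_hist(ax, residuals, hist=True):
    # hist=True or "frequency" -> raw counts; hist="density" -> PDF;
    # hist in (False, None) -> no histogram at all.
    if hist in (True, "frequency"):
        ax.hist(residuals, bins=50, density=False)
    elif hist == "density":
        ax.hist(residuals, bins=50, density=True)

fig, ax = plt.subplots()
draw_residual_hist(ax, [0.1, -0.2, 0.05, 0.3], hist="density")
```
| 0easy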
|
Title: status_code requires __ne__ to work in python 2.7.x
Body: Quoting the 2.7.x data model [1]:
"There are no implied relationships among the comparison operators. The truth of x==y does not imply that x!=y is false. Accordingly, when defining `__eq__()`, one should also define `__ne__()` so that the operators will behave as expected."
So in Python 2.7.x one cannot do the following:
```python
browser = Browser()
browser.visit("http://www.google.com")
if browser.status_code != 200:
    print("ERROR")
```
This however will work in Python 3.x, because the data model says `__ne__` will automatically invert the return value of `__eq__`.
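A sketch of the kind of fix this implies for the status code object (illustrative; not the actual splinter class):
```python
class StatusCode(object):
    def __init__(self, code):
        self.code = code

    def __eq__(self, other):
        return self.code == other

    # Python 2.7 does not derive __ne__ from __eq__, so define it explicitly.
    def __ne__(self, other):
        return not self.__eq__(other)
```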
[1] https://docs.python.org/2/reference/datamodel.html
| 0easy
|
Title: Remove support for HttpCompressionMiddleware subclasses without stats
Body: Deprecated in 2.5.0. | 0easy
|
Title: Support limit -1 in indices
Body: Support limit=-1 in document indices, returning all matches
```python
docs, scores = doc_index.find(query, search_field='tensor', limit=-1)
assert len(docs) == len(scores) == doc_index.num_docs()
```
| 0easy
|
Title: What is correct behavior of keras.ops.eye?
Body: Hi,
I have a question about `keras.ops.eye` in Keras 3.
With the TensorFlow backend, `keras.ops.eye` accepts integers and floats.
However, with the torch and jax backends, only integers are accepted.
Which behavior is correct?
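A minimal illustration of the mismatch (hedged; this is just how I ran into it):
```python
import os
os.environ["KERAS_BACKEND"] = "torch"  # same story with "jax"

import keras

keras.ops.eye(3)    # works on every backend
keras.ops.eye(3.0)  # works with the tensorflow backend, fails here
```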
The documentation does not describe any such limitation.
Because Keras 2 accepted integers and floats, I was confused by this during migration. | 0easy
|
Title: Add/move `_differentiable_clipping`, `_differentiable_polynomial_floor` and `_differentiable_polynomial_rounding` to our utility functions
Body: For a further PR, I think having `_differentiable_clipping`, `_differentiable_polynomial_floor` and `_differentiable_polynomial_rounding` in our utilities can be useful :)
_Originally posted by @johnnv1 in https://github.com/kornia/kornia/pull/2748#discussion_r1464089248_
| 0easy
|
Title: Allow `display_skipped_hosts` to be configured via vars
Body: ### Summary
Allow callback plugins to be dynamically configured via vars, specifically in this case, allow `display_skipped_hosts` to be configured dynamically
### Issue Type
Feature Idea
### Component Name
lib/ansible/plugins/callback/default.py
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | 0easy
|
Title: Feature: Add DI for ASGI routes
Body: **Is your feature request related to a problem? Please describe.**
My app may have different dependencies that I want check using k8s probes (database, s3, etc)
**Describe the solution you'd like**
I need possibility using DI for access to my dependency
**Feature code example**
```python
from faststream import FastStream, Depends
from faststream.asgi import AsgiResponse  # import added for completeness
from faststream.rabbit import RabbitBroker

broker = RabbitBroker("amqp://guest:guest@localhost:5672/")

def simple_dependency() -> int:
    return 1

async def readiness(scope, receive, send, d: int = Depends(simple_dependency)):
    assert d == 1
    return AsgiResponse(b"", status_code=200)

app = FastStream(broker).as_asgi(
    asgi_routes=[
        ("/readiness", readiness),
    ],
    asyncapi_path="/docs",
)
``` | 0easy
|
Title: [BUG] Unclear error when client cannot connect to server
Body: A user reported seeing this exception:
>> ConnectionError: HTTPSConnectionPool(host='###.###.com', port=443): Max retries exceeded with url: /api-token-auth/ (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x000002047EF726C8>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
On connection issues during `register`/`plot`, we should provide a more understandable error.
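Something along these lines might do (a sketch; the wrapper name and call site are illustrative, not PyGraphistry's actual code):
```python
import requests

def post_with_friendly_error(session, url, **kwargs):
    try:
        return session.post(url, **kwargs)
    except requests.exceptions.ConnectionError as e:
        raise RuntimeError(
            f"Could not reach the Graphistry server at {url}. "
            "Check the hostname, your network/VPN, and any proxy settings."
        ) from e
```
| 0easy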
|
Title: How to add color legend for predicted masks in semantic segmentation?
Body: Thank you very much for this great work. The tools and tutorials are really easy to follow, and they encourage people from a non-technical background like me to enjoy CV and get a taste of deep learning.
May I ask a question about plotting predicted masks in semantic segmentation, please? I was following [this official tutorial](https://gluon-cv.mxnet.io/build/examples_segmentation/demo_psp.html). After predicting with a pre-trained model, I want to plot the masks together with a legend bar indicating which color represents which class, like a blue square with the text "water". Is there an easy way to add the legend bar and associated texts? I found a [repo](https://github.com/RomRoc/deeplab_retrain_ade20k_colab) that may provide a solution by hard coding a lot, but it looks difficult for me so far.
Since there is a handy function `get_color_pallete(predict, 'ade20k')` to help transform the mask color to 'ade20k', is it possible to enhance one of the utility functions to add a color legend with text?
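For anyone else looking, a rough matplotlib-only sketch (not a gluon-cv API; `palette` and `class_names` are assumed to come from the dataset definition):
```python
import matplotlib.patches as mpatches

def add_mask_legend(ax, class_ids, palette, class_names):
    # One colored patch per class actually present in the prediction;
    # palette[i] is assumed to be an (r, g, b) triple in 0-255.
    handles = [
        mpatches.Patch(color=[c / 255 for c in palette[i]], label=class_names[i])
        for i in class_ids
    ]
    ax.legend(handles=handles, loc="lower right", fontsize="small")
```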
| 0easy
|
Title: Cells sometimes switch columns in app view
Body: ### Describe the bug
When editing a notebook in app view, cells that are supposed to be in one column are sometimes rendered in the other.
Example:
<img width="1390" alt="Image" src="https://github.com/user-attachments/assets/4f391ac1-306d-4b41-8e13-66475c95f5e7" />
Expected: In app view, left column says hello, right column says Title, bye.
Actual:
<img width="1101" alt="Image" src="https://github.com/user-attachments/assets/cfc30b24-6478-4e41-9727-22f2a44860d7" />
### Environment
<details>
```
{
"marimo": "0.10.16",
"OS": "Darwin",
"OS Version": "22.5.0",
"Processor": "arm",
"Python Version": "3.12.4",
"Binaries": {
"Browser": "131.0.6778.265",
"Node": "v21.5.0"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.23.0",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.1",
"pyyaml": "6.0.2",
"ruff": "0.9.2",
"starlette": "0.45.2",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.2"
},
"Optional Dependencies": {}
```
</details>
### Code to reproduce
```python
import marimo

__generated_with = "0.10.16"
app = marimo.App(width="columns")

@app.cell(column=0, hide_code=True)
def _(mo):
    mo.md(
        """
        hello
        hello
        hello
        hello
        """
    )
    return

@app.cell(column=1, hide_code=True)
def _(mo):
    mo.md("""# Title""")
    return

@app.cell
def _(mo):
    mo.md("""bye""")
    return

@app.cell
def _():
    import marimo as mo
    return (mo,)

if __name__ == "__main__":
    app.run()
``` | 0easy
|
Title: Improve printout for Golden Features
Body: There should be a printout to the console listing the features generated in the Golden Features step. | 0easy
|
Title: Link text incorrect in SKIP 1
Body: ### Description:
> Changes to this governance model or our mission, vision, and values require a [Improvement proposals (SKIPs)](https://scikit-image.org/docs/stable/skips/1-governance.html#skip) and follow the decision-making process outlined above, unless there is unanimous agreement from core developers on the change.
This does not parse; it should read "require an improvement proposal (SKIP)".
### Way to reproduce:
_No response_
### Version information:
_No response_ | 0easy
|
Title: Boolean issues in config.yaml
Body: I keep getting these errors even though the syntax is completely fine. I performed the installation twice and keep getting the same error.


| 0easy
|
Title: [BUG] gridsearch with RegressionModel
Body: I am trying to implement grid search for 3 parameters in the elasticnet regression model from sklearn and wrapping the darts RegressionModel around that. Based on the documentation of [grid search](https://unit8co.github.io/darts/generated_api/darts.models.forecasting.regression_model.html?highlight=regression%20model%20grid%20search#darts.models.forecasting.regression_model.RegressionModel.gridsearch), this is how I initialised the grid search:
```python
grid_params = {
'max_iter': [1, 5, 10, 50, 100, 200],
'alpha': [1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 0.0, 1.0, 10.0, 100.0],
'l1_ratio': [0.01, 0.1, 0.3, 0.6, 0.9, 1]
}
sklearn_elasticnet_model = make_pipeline(
ElasticNet(random_state = 42)
)
elasticnet_model = RegressionModel(
lags=target_lags,
lags_past_covariates=past_cov_lags,
lags_future_covariates=future_cov_lags,
output_chunk_length=forecast_horizon,
multi_models=True,
model=sklearn_elasticnet_model
)
elasticnet_model.gridsearch(grid_params,
series=target_series_sample,
future_covariates=future_cov_sample,
forecast_horizon=24,
stride=24)
```
However, I am getting this error: `TypeError: RegressionModel.__init__() got an unexpected keyword argument 'max_iter'`. Am I implementing it wrongly? | 0easy
|
Title: Unexpected numerical token
Body: ```xsh
echo $123
# Unexpected token: TokenInfo(type=55 (OP), string='$123', start=(1, 7), end=(1, 11), line='![echo $123]\n')
showcmd echo $123
# ['echo', "Unexpected token: TokenInfo(type=55 (OP), string='$123', start=(1, 15), end=(1, 19), line='![showcmd echo $123]\\n')"]
$(echo $123)
# "Unexpected token: TokenInfo(type=55 (OP), string='$123', start=(1, 7), end=(1, 11), line='$(echo $123)\\n')"
```
Here, instead of returning the error as the result, we need to do the same as:
```xsh
echo $qwe
# $qwe
```
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: bash script for ploomber setup
Body: We've noticed that users struggle setting up ploomber. This mostly happens if they install it in an existing virtual environment that has many dependencies, or if they have an outdated pip version, among other reasons.
An alternative would be to have a shell script (and possibly a script that works on Windows as well) that installs ploomber in a clean environment and runs sanity checks (like running a sample pipeline) to ensure the user has a working environment, and if that isn't possible, suggests asking for help in https://ploomber.io/community
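A first sketch of what the script could do (the commands are illustrative assumptions, not a finished installer):
```bash
#!/usr/bin/env bash
set -euo pipefail

# Clean environment so existing dependencies can't interfere
python3 -m venv ploomber-env
source ploomber-env/bin/activate

python -m pip install --upgrade pip   # avoid outdated-pip failures
pip install ploomber

# Sanity check: download and run a sample pipeline
ploomber examples -n templates/ml-basic -o sample-pipeline
cd sample-pipeline
pip install -r requirements.txt
ploomber build || echo "Setup failed; ask for help at https://ploomber.io/community"
```
| 0easy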
|
Title: Handle robots.txt files not utf-8 encoded
Body: ## Summary
`robots.txt` files that are not `utf-8` encoded currently make `scrapy` raise a `UnicodeDecodeError`:
```
.venv/lib/python3.11/site-packages/scrapy/robotstxt.py", line 15, in decode_robotstxt
robotstxt_body = robotstxt_body.decode("utf-8")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xdf in position 46: invalid continuation byte
```
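A lenient fallback along these lines might be enough (a sketch of the idea, not necessarily the right upstream fix):
```python
def decode_robotstxt(robotstxt_body: bytes) -> str:
    # Try strict UTF-8 first; fall back to replacing undecodable bytes
    # instead of crashing the whole crawl on a malformed robots.txt.
    try:
        return robotstxt_body.decode("utf-8")
    except UnicodeDecodeError:
        return robotstxt_body.decode("utf-8", errors="replace")
```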
## Motivation
Prevents from being able to manage all scrapers in an automated fashion, like described in "Describe alternatives you've considered".
## Describe alternatives you've considered
Check the `robots.txt` file manually for the scraper in question, or disable `robots.txt` parsing globally using `ROBOTSTXT_OBEY`. Either way, this prevents being able to process all scrapers in an automated fashion.
## Additional context
| 0easy
|
Title: Modify regex
Body: ollama:deepseek-coder-v2
Generates code like the following:
````
 ```python
def add(a, b):
    if b == 0:
        return float('inf')  # Using infinity to avoid division by zero
    else:
        return a / b
add(1,0)
 ```
````
Note the space before the opening `` ```python `` fence, which causes the output to be parsed incorrectly. We need to modify `lang_indicator` as follows:
```
lang_indicator = r"^\s*```[a-zA-Z0-9]*\s*\n"
```
| 0easy
|
Title: [Readme] Essential Guidelines Before Submitting an Issue
Body: ## When opening an `issue`, please select the appropriate template and complete the items in the checklist (change `[ ]` to `[x]` to indicate that the item has been checked)
To ensure effective communication and timely resolution of issues, please make sure to read and understand [How To Ask Questions The Smart Way](http://www.catb.org/~esr/faqs/smart-questions.html) before posting an `issue`.
Please note the following:
- `Issues` in this project are mainly for handling `bugs` and receiving `feature requests`. For operational, configuration issues, and client-related questions, it is recommended to post in [discussions](https://github.com/vvbbnn00/WARP-Clash-API/discussions) to get more appropriate help. `Issues` that do not meet the requirements may be closed.
- If you are reporting a `bug`, please make sure that the issue is indeed caused by this project and not by other factors such as Cloudflare WARP's own service limitations or lack of client support. If you are not sure, you can also consult in [discussions](https://github.com/vvbbnn00/WARP-Clash-API/discussions).
- Before creating an `issue`, please check if there is already a related issue to avoid duplicate submissions. Thank you for your understanding and cooperation!
- When filling out the `issue` template, you can selectively fill in the content as needed, but **please ensure that the checklist, problem description, and other necessary parts are retained**.
- Please understand that the project developers also have their main jobs and may not be able to respond to every issue promptly. However, you can still seek help in the community. Thank you for your patience!
Finally, thank you very much for your contribution and support to this project!
| 0easy
|
Title: Docs: Add guard example to JWT docs
Body: ### Summary
Currently in the [JWT docs](https://docs.litestar.dev/latest/usage/security/jwt.html) there are a few references regarding how to access the 'User' object during and after being authenticated. These boil down to:
a) queried using the token
b) directly from the request (which can be easily assumed to be attached via retrieve_user_handler prior to going to the api path)
However, there are other instances where user details need to be extracted from the `connection` object (such as in role-based guards):
```py
def admin_guard(connection: ASGIConnection, _: BaseRouteHandler) -> None:
    if not connection.user.is_admin:
        raise NotAuthorizedException()
```
A gap in knowledge between the page on JWTs and the page on guards is that it's not made entirely clear _how_ the user gets attached to the connection. I would like to suggest that an example guard be added to the JWT docs with a comment explaining that the Auth object automatically attaches it for you based on the object returned from `retrieve_user_handler`.
It also isn't made abundantly clear that the TypeVar provided to the Auth object directly corresponds to the retrieve_user_handler. For a little while, I was actually setting the TypeVar based on my login response and wondering why it wasn't working. A silly mistake in hindsight, but I believe a simple comment could have saved me from it! | 0easy
|
Title: Cannot create new model in database
Body: **Problem 1: When I create a new model and run docker-compose, it does not exist in my database.**
```python
from datetime import UTC, datetime
from sqlalchemy.orm import Mapped, mapped_column
from sqlalchemy import DateTime, ForeignKey, String,Integer
from ..core.db.database import Base
class Template(Base):
    __tablename__ = "template"

    id: Mapped[int] = mapped_column("id", autoincrement=True, nullable=False, unique=True, primary_key=True, init=False)
    text: Mapped[str] = mapped_column(String)
    description: Mapped[str] = mapped_column(String)
    created_by: Mapped[int] = mapped_column(Integer, nullable=False)
    updated_by: Mapped[int] = mapped_column(Integer, nullable=False)
    deleted_by: Mapped[int] = mapped_column(Integer, nullable=False)
    created_at: Mapped[datetime] = mapped_column(DateTime(timezone=True), default_factory=lambda: datetime.now(UTC))
    updated_at: Mapped[datetime | None] = mapped_column(DateTime(timezone=True), default=None)
    deleted_at: Mapped[datetime | None] = mapped_column(DateTime(timezone=True), default=None)
    is_deleted: Mapped[bool] = mapped_column(default=False, index=True)
```
**Screenshots**

**Problem 2: When I migrate, all tables in my database get deleted. This is the migration log:**
```
poetry run alembic revision --autogenerate
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.ddl.postgresql] Detected sequence named 'rate_limit_id_seq' as owned by integer column 'rate_limit(id)', assuming SERIAL and omitting
INFO [alembic.autogenerate.compare] Detected removed index 'ix_rate_limit_tier_id' on 'rate_limit'
INFO [alembic.autogenerate.compare] Detected removed table 'rate_limit'
INFO [alembic.ddl.postgresql] Detected sequence named 'token_blacklist_id_seq' as owned by integer column 'token_blacklist(id)', assuming SERIAL and omitting
INFO [alembic.autogenerate.compare] Detected removed index 'ix_token_blacklist_token' on 'token_blacklist'
INFO [alembic.autogenerate.compare] Detected removed table 'token_blacklist'
INFO [alembic.ddl.postgresql] Detected sequence named 'post_id_seq' as owned by integer column 'post(id)', assuming SERIAL and omitting
INFO [alembic.autogenerate.compare] Detected removed index 'ix_post_created_by_user_id' on 'post'
INFO [alembic.autogenerate.compare] Detected removed index 'ix_post_is_deleted' on 'post'
INFO [alembic.autogenerate.compare] Detected removed table 'post'
INFO [alembic.autogenerate.compare] Detected removed index 'ix_user_email' on 'user'
INFO [alembic.autogenerate.compare] Detected removed index 'ix_user_is_deleted' on 'user'
INFO [alembic.autogenerate.compare] Detected removed index 'ix_user_tier_id' on 'user'
INFO [alembic.autogenerate.compare] Detected removed index 'ix_user_username' on 'user'
INFO [alembic.autogenerate.compare] Detected removed table 'user'
INFO [alembic.autogenerate.compare] Detected removed table 'tier'
Generating /src/migrations/versions/d02f4a524850_.py ... done
```
| 0easy
|
Title: Add `nunique`
Body: ### Is your feature request related to a problem?
From https://github.com/pydata/xarray/issues/9544#issuecomment-2372685411
> Though perhaps we should add nunique along a dimension implemented as sort along axis, succeeding-elements-are-not-equal along axis handling NaNs, then sum along axis.
> xref [pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.nunique.html](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.nunique.html)
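A plain-NumPy sketch of that recipe, ignoring the NaN handling, for concreteness:
```python
import numpy as np

def nunique(a, axis=-1):
    # Sort, then count positions where the value changes, plus one
    # for the first element. NaN handling is left out of this sketch.
    a = np.sort(a, axis=axis)
    changes = np.diff(a, axis=axis) != 0
    return 1 + changes.sum(axis=axis)

print(nunique(np.array([[1, 2, 2, 3], [4, 4, 4, 4]])))  # [3 1]
```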
I think I'd add it to https://github.com/pydata/xarray/blob/main/xarray/util/generate_aggregations.py
| 0easy
|
Title: set path where to save models
Body: Path should be set in config file. Right now the path is hard coded to '/tmp'. | 0easy
|
Title: Allow spaces in AUTO_SUGGEST_IN_COMPLETIONS?
Body: I'd like to allow spaces within the suggestion feature in completions; however, it currently strips out everything after the first space. I tried messing around with `completer.py` and `auto_suggest` for a while but gave up because I couldn't get it working consistently. | 0easy
|
Title: week 3 workbooks seed issue
Body: Right now the code in https://github.com/Yorko/mlcourse_open/blob/master/jupyter_notebooks/topic03_decision_trees_knn/topic3_trees_knn.ipynb `In[3]` is `np.seed = 7`, but this seems to be a typo and should be `np.random.seed(7)`? | 0easy
|
Title: chore: use uv GHA to get it install manual installation
Body: I think we should use https://github.com/astral-sh/setup-uv instead of the manual installation at https://github.com/airtai/faststream/blob/main/.github/workflows/pr_tests.yaml#L68, everywhere.
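Roughly, the install step would become something like this (version pin illustrative):
```yaml
- name: Install uv
  uses: astral-sh/setup-uv@v5
  with:
    enable-cache: true
```
| 0easy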
|
Title: Code listing for app.py in "look" Tutorial has a bug
Body: In the [Serving images](https://falcon.readthedocs.io/en/stable/user/tutorial.html#serving-images) section, the code listing for `app.py` tries to import the `images` module as `import images`. I believe this should be `from look import images` or, depending on one's preferences, `import look.images` with references to `images` refactored as `look.images`. I prefer the former:
```python
import os
import falcon
from look import images
def create_app(image_store):
api = falcon.API()
api.add_route('/images', images.Collection(image_store))
api.add_route('/images/{name}', images.Item(image_store))
return api
def get_app():
storage_path = os.environ.get('LOOK_STORAGE_PATH', '.')
image_store = images.ImageStore(storage_path)
return create_app(image_store)
```
| 0easy
|
Title: Bug(docs): AA intersphinx 404
Body: ### Description
> intersphinx inventory 'https://docs.advanced-alchemy.jolt.rs/latest/objects.inv' not fetchable due to <class 'requests.exceptions.HTTPError'>: 404 Client Error: Not Found for url: https://docs.advanced-alchemy.jolt.rs/latest/objects.inv
### Litestar Version
Main
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | 0easy
|
Title: Improve developer documentation on how to install Firefox
Body: Currently it's a bit sparse: https://nbgrader.readthedocs.io/en/stable/contributor_guide/installation_developer.html#installing-firefox-headless-webdriver | 0easy
|
Title: Add Docstrings
Body: ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
Having docstrings for all methods and classes will help new contributors understand the codebase and also provide a method of autogenerated API docs. Right now only some methods and classes have docstrings.
### Describe the solution you'd like.
Example of already implemented docstring.
```python
def get_aggs(self,
             symbol: str,
             multiplier: int,
             timespan: str,
             _from: str,
             to: str) -> Aggs:
    """
    :param symbol: str eg AAPL
    :param multiplier: must be 1
    :param timespan: day or minute
    :param _from: yyyy-mm-dd
    :param to: yyyy-mm-dd
    :return:
    """
    resp = self.data_get('/aggs/ticker/{}/range/{}/{}/{}/{}'.format(
        symbol, multiplier, timespan, _from, to
    ))
    return self.response_wrapper(resp, Aggs)
```
### Describe an alternate solution.
_No response_
### Anything else? (Additional Context)
_No response_ | 0easy
|
Title: cancelled vs canceled - standardize spelling in codebase
Body: ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
There are a couple of places that still use "cancelled" even though AWX appears to have standardized "canceled"
https://github.com/search?q=repo%3Aansible%2Fawx%20cancelled&type=code <- only one in particular looks to be problematic from a functionality perspective within the UI for WSRelay.
I have not tested this to know if it impacts functionality though.
### AWX version
24.3.1
### Select the relevant components
- [X] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [X] Other
### Installation method
docker development environment
### Modifications
no
### Ansible version
n/a
### Operating system
n/a
### Web browser
n/a
### Steps to reproduce
search codebase for "cancelled" or "canceled" and compare
### Expected results
n/a
### Actual results
n/a
### Additional information
_No response_ | 0easy
|
Title: [BUG] Internal error when loading schema from a non-existent file
Body: ```
> st run foo.json --base-url=http://example.com
Test Execution Error
An internal error occurred during the test run
AttributeError: 'NoneType' object has no attribute 'url'
``` | 0easy
|
Title: Flexible options for color palette.
Body: Currently, the color palette is hardcoded in visuals.py:
https://github.com/MLWave/kepler-mapper/blob/0dfa7f736bd4218e8996888785b9d2adc63d5344/kmapper/visuals.py#L8-L15
and again in kmapper.js
https://github.com/MLWave/kepler-mapper/blob/0dfa7f736bd4218e8996888785b9d2adc63d5344/kmapper/static/kmapper.js#L25-L32
I'd like to see this
- [ ] duplication fixed and
- [ ] allow the palette to be an argument so users can choose a custom palette.
Reducing the duplication should be straightforward. Allowing arbitrary palettes might be more difficult. Does the current code assume palettes of a certain length?
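One way to remove the duplication is to keep the palette in Python only and emit it into the generated page for kmapper.js to read (a sketch; the hex values are just an illustrative viridis-like set):
```python
import json

PALETTE = ["#440154", "#3b528b", "#21918c", "#5ec962", "#fde725"]

def palette_js(palette=PALETTE):
    # Serialized once from visuals.py into the HTML template, so
    # kmapper.js no longer needs its own hardcoded copy.
    return f"var palette = {json.dumps(palette)};"
```
| 0easy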
|
Title: PyCharm doesn't like `isinstance(docs, DocumentArray[MyDoc])`
Body: # Context
This code runs:
```python
from docarray import BaseDocument, DocumentArray

class MyDoc(BaseDocument):
    name: str

docs = DocumentArray[MyDoc]([MyDoc(name='hello') for i in range(4)])
assert isinstance(docs, DocumentArray[MyDoc])
```
but PyCharm doesn't like it:

Probably because it does not detect that `DocumentArray[MyDoc]` is just a normal class | 0easy
|
Title: [BUG] Error while resetting index
Body: **Describe the bug**
Getting this while resetting the index:
```
Ignoring exception in on_application_command_error
Traceback (most recent call last):
File "/home/xxx/.local/lib/python3.9/site-packages/discord/commands/core.py", line 124, in wrapped
ret = await coro(arg)
File "/home/xxx/.local/lib/python3.9/site-packages/discord/commands/core.py", line 978, in _invoke
await self.callback(self.cog, ctx, **kwargs)
File "/home/xxx/xxx/cogs/commands.py", line 182, in delete_all_conversation_threads
await self.converser_cog.delete_all_conversation_threads_command(ctx)
File "/home/xxx/gpt2/cogs/text_service_cog.py", line 855, in delete_all_conversation_threads_command
await ctx.respond("All conversation threads in this server have been deleted.")
File "/home/xxx/.local/lib/python3.9/site-packages/discord/commands/context.py", line 286, in respond
return await self.followup.send(*args, **kwargs) # self.send_followup
File "/home/xxx/.local/lib/python3.9/site-packages/discord/webhook/async_.py", line 1745, in send
data = await adapter.execute_webhook(
File "/home/xxx/.local/lib/python3.9/site-packages/discord/webhook/async_.py", line 219, in request
raise NotFound(response, data)
discord.errors.NotFound: 404 Not Found (error code: 10008): Unknown Message
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/xxx/.local/lib/python3.9/site-packages/discord/client.py", line 378, in _run_event
await coro(*args, **kwargs)
File "/home/xxx/xxx/gpt3discord.py", line 120, in on_application_command_error
raise error
File "/home/xxx/.local/lib/python3.9/site-packages/discord/bot.py", line 1114, in invoke_application_command
await ctx.command.invoke(ctx)
File "/home/xxx/.local/lib/python3.9/site-packages/discord/commands/core.py", line 375, in invoke
await injected(ctx)
File "/home/xxx/.local/lib/python3.9/site-packages/discord/commands/core.py", line 124, in wrapped
ret = await coro(arg)
File "/home/xxx/.local/lib/python3.9/site-packages/discord/commands/core.py", line 1312, in _invoke
await command.invoke(ctx)
File "/home/xxx/.local/lib/python3.9/site-packages/discord/commands/core.py", line 375, in invoke
await injected(ctx)
File "/home/xxx/.local/lib/python3.9/site-packages/discord/commands/core.py", line 132, in wrapped
raise ApplicationCommandInvokeError(exc) from exc
discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: NotFound: 404 Not Found (error code: 10008): Unknown Message
Traceback (most recent call last):
File "/home/xxx/xxx/models/index_model.py", line 177, in reset_indexes
EnvService.find_shared_file(f"indexes/{user_id}_search")
File "/home/xxx/xxx/services/environment_service.py", line 73, in find_shared_file
raise ValueError(f"Unable to find shared data file {file_name}")
ValueError: Unable to find shared data file indexes/417185533564026882_search
```
**To Reproduce**
Steps to reproduce the behavior:
- Add an index
- Try resetting
**Expected behavior**
Reset to be successful, but seems not
**Screenshots**
N/A
**Additional context**
N/A
| 0easy
|
Title: Method annotations incorrectly decorate inherited consumer methods
Body: ## Precondition
Consider the following consumer class:
```python
class GitHub(uplink.Consumer):
    @uplink.get("/users/{username}")
    def get_user(self, username):
        """Get a single user."""
```
Use any method annotation. For this example, I'll create a custom method annotation, `PrintMethodName`, that simply prints the name of the consumer method that it wraps:
```python
from uplink import decorators
class PrintMethodName(decorators.MethodAnnotation):
    def modify_request_definition(self, method):
        print(method.__name__)
```
## Steps to recreate
Create a subclass of `GitHub` and decorate it with the method annotation:
```python
@PrintMethodName()
class GitHubSubclass(GitHub):
    pass
```
## Expected
No output to stdout, since `GitHubSubclass` doesn't define any consumer methods.
## Actual
The method annotation decorates `get_user` from the parent class, `GitHub`.
Output:
```python
get_user
``` | 0easy
|
Title: [Bug]: I give an RGB image to imsave but I don't have the right color map!
Body: ### Bug summary
When I give an RGB image as a list of list of list to imsave, I don't have the right colormap and there is no rgb colormap!
### Code for reproduction
```Python
import matplotlib.pyplot as plt

I = plt.imread("notre-dame.jpg")
m, n, p = I.shape  # m = width, n = height, p = tuple (channels)

J = [[[0 for d in range(p)] for e in range(n)] for f in range(m)]
K = [[[0 for d in range(p)] for e in range(n)] for f in range(m)]
L = [[[0 for d in range(p)] for e in range(n)] for f in range(m)]

for i in range(m):
    for j in range(n):
        J[i][j] = [int(I[i][j][0]), 0, 0]
        K[i][j] = [0, int(I[i][j][1]), 0]
        L[i][j] = [0, 0, int(I[i][j][2])]
fig, ax=plt.subplots(4,figsize=(28,10))
ax[0].imshow(I)
ax[1].imshow(J)
ax[2].imshow(K)
ax[3].imshow(L)
ax[0].set_axis_off()
ax[1].set_axis_off()
ax[2].set_axis_off()
ax[3].set_axis_off()
ax[0].set_title('Image complete')
ax[1].set_title('Composante rouge')
ax[2].set_title('Composante bleue')
ax[3].set_title('Composante verte')
fig.suptitle("Image de Notre-Dame", x=0.51,y=.93)
plt.imsave("Notre-Dame_rouge.png",J) #The problem is here!
plt.savefig("Notre-Dame_composante_ex1.png",dpi=750,bbox_inches="tight")
plt.savefig("Notre-Dame_composante_ex1.pdf",dpi=750,bbox_inches="tight")
plt.show()
```
### Actual outcome

### Expected outcome

The red one in a png image!
### Additional information

The original image
### Operating system
Win 11
### Matplotlib Version
3.9.2
### Matplotlib Backend
module://matplotlib_inline.backend_inline
### Python version
3.11.10
### Jupyter version
_No response_
### Installation
conda | 0easy
|
Title: Implements fill methods in GroupBy
Body: **Is your feature request related to a problem? Please describe.**
Mars currently lacks these fill methods on GroupBy objects:
* [ ] [`GroupBy.bfill`](https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.bfill.html#)
* [ ] [`GroupBy.ffill`](https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.ffill.html#)
* [ ] [`DataFrameGroupBy.fillna`](https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.fillna.html#)
**Describe the solution you'd like**
Possible to implement with `groupby(xxx).apply(...)`. Maybe possible to keep original index to reduce shuffle in data alignment. | 0easy
|
Title: 🚀OpenVINO Backend: GFIs for Further Development🚀
Body: 🚀**A great opportunity to contribute to two popular AI projects with just one PR: [Keras 3](https://github.com/keras-team/keras) and [OpenVINO](https://github.com/openvinotoolkit/openvino).**🚀
Keras 3 enables seamless switching between supported backends—PyTorch, TensorFlow, and JAX—for both training and inference of traditional models and LLMs/GenAI pipelines.
Since Keras 3.8.0, we've introduced a preview version of the OpenVINO backend (for inference only), allowing developers to leverage OpenVINO for model predictions directly within Keras 3 workflows. Activating the OpenVINO backend requires just one line of code to run inference on Keras 3-trained models. Here’s an example for a BERT model from Keras Hub:
```python
import os
os.environ["KERAS_BACKEND"] = "openvino"
import numpy as np
import keras
import keras_hub
features = {
"token_ids": np.ones(shape=(2, 12), dtype="int32"),
"segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]] * 2),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
classifier = keras_hub.models.BertTextClassifier.from_preset("bert_base_en_uncased", num_classes=4, preprocessor=None)
output = classifier.predict(features)
```
Currently, the OpenVINO backend lacks support for some operations. Our goal is to resolve this gap and to optimize it for inference on Intel devices—including CPUs, integrated GPUs, discrete GPUs, and NPUs—by supporting as many models as possible while delivering optimal performance. We aim to make the OpenVINO backend **the No. 1 choice for model inference within the Keras 3 workflow**.
We warmly welcome you to participate in further development of the OpenVINO backend. Here is a list of good-first-issues (it will be periodically updated with new ones):
- https://github.com/openvinotoolkit/openvino/issues/29011
- https://github.com/openvinotoolkit/openvino/issues/29115
- https://github.com/openvinotoolkit/openvino/issues/29116
- https://github.com/openvinotoolkit/openvino/issues/29118
- https://github.com/openvinotoolkit/openvino/issues/29358
- https://github.com/openvinotoolkit/openvino/issues/29359
- https://github.com/openvinotoolkit/openvino/issues/29361
- https://github.com/openvinotoolkit/openvino/issues/29483
- https://github.com/openvinotoolkit/openvino/issues/29484
- https://github.com/openvinotoolkit/openvino/issues/29485
- https://github.com/openvinotoolkit/openvino/issues/29486
- https://github.com/openvinotoolkit/openvino/issues/29487
- https://github.com/openvinotoolkit/openvino/issues/29488
- https://github.com/openvinotoolkit/openvino/issues/29489
<details>
<summary>Already done issues</summary>
- https://github.com/openvinotoolkit/openvino/issues/29008
- https://github.com/openvinotoolkit/openvino/issues/29009
- https://github.com/openvinotoolkit/openvino/issues/29010
- https://github.com/openvinotoolkit/openvino/issues/29012
- https://github.com/openvinotoolkit/openvino/issues/29013
- https://github.com/openvinotoolkit/openvino/issues/29014
- https://github.com/openvinotoolkit/openvino/issues/29114
- https://github.com/openvinotoolkit/openvino/issues/29117
- https://github.com/openvinotoolkit/openvino/issues/29119
- https://github.com/openvinotoolkit/openvino/issues/29357
- https://github.com/openvinotoolkit/openvino/issues/29360
</details>
| 0easy
|
Title: Improve or replace the configuration reST table
Body: ## Problem
This page https://dynaconf.readthedocs.io/en/latest/guides/configuration.html
has a table that is written using reST syntax https://github.com/rochacbruno/dynaconf/blob/master/docs/guides/configuration.md#configuration-options
this table is very difficult to maintain, it is not easy to add new options and it is also not responsive.
## Wanted solution
- A table that is easy to maintain (to add new options)
- A table that can be responsive on mobile devices.
## Ideas
- Have all that data set in a YAML file or a simple python dictionary
- Then use that data to generate the table when the docs are generated (see the sketch after this list)
- Use some css/js skill to make searchable, collapsible, responsive
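A tiny sketch of that idea (the option entry shown is illustrative, not the full list):
```python
# Generate the markdown table from plain data at docs build time.
OPTIONS = [
    {"name": "ENVVAR_PREFIX_FOR_DYNACONF", "default": "DYNACONF",
     "description": "Prefix for environment variables"},
]

def render_table(options):
    lines = ["| name | default | description |", "|---|---|---|"]
    lines += [
        f"| {o['name']} | {o['default']} | {o['description']} |"
        for o in options
    ]
    return "\n".join(lines)

print(render_table(OPTIONS))
```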
> **Any other ideas are welcome** | 0easy
|
Title: Compute metrics with selected threshold
Body: Based on discussion in https://github.com/mljar/mljar-supervised/discussions/418 it will be helpful to compute metrics with the same threshold value. | 0easy
|
Title: [helper_visitors] Write tests for vars_visitor.py
Body: It has the lowest test coverage out of every file.
Note: especially `visit_Subscript`, so I can improve the `get_call_names` documentation. | 0easy
|
Title: Manim current status and contribution guidelines
Body: Hello everyone, I will explain the current status of manim in this issue.
Now there are _three_ main manim versions, and their differences:
1. **The `master` branch of [3b1b/manim](https://github.com/3b1b/manim)**: Rendering on GPU using OpenGL and moderngl. Support interaction and have higher efficiency.
2. **[ManimCommunity/manim](https://github.com/ManimCommunity/manim)**: (@ManimCommunity is being improved and will be released on pypi) Using multiple backend rendering. There is better documentation and a more open contribution community.
3. **The [`cairo-backend`](https://github.com/3b1b/manim/tree/cairo-backend) branch of [3b1b/manim](https://github.com/3b1b/manim)**: Rendering on the CPU using cairo. Relatively stable, no longer be developed.
If you want to contribute to manim:
- **The master branch of [3b1b/manim](https://github.com/3b1b/manim)**: See https://3b1b.github.io/manim/development/contributing.html
- **[ManimCommunity/manim](https://github.com/ManimCommunity/manim)** accepts a wider range of new features and pull requests.
- **[manim-kindergarten/manim_sandbox](https://github.com/manim-kindergarten/manim_sandbox)** has a lot of new common classes in the `utils/` folder, and accepts pull requests.
Lastly, thank you very much for using manim and willing to contribute. Enjoy yourself ! 😄 | 0easy
|
Title: [ENH] `SignatureTransformer` and `SignatureClassifier` `numpy 2` compatibility
Body: `SignatureTransformer` and `SignatureClassifier` fail under `numpy 2`, this should be upgraded.
The failure is
```
> from . import _tosig as tosig
E ImportError: numpy.core.multiarray failed to import
``` | 0easy
|
Title: Remove scrapy.pipelines.media.MediaPipeline._make_compatible
Body: Added in 2.4.0 for backwards compatibility. | 0easy
|
Title: [Bug] Min new tokens error
Body: ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
Running this file: /sglang/test/srt/sampling/penaltylib/penalizers/test_min_new_tokens.py
There is an error. Please fix it.
### Reproduction
NA
### Environment
NA | 0easy
|
Title: BUG: to_numpy() ignores dtype argument for datetime64 column
Body: ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [ ] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
import modin.pandas as pd
import pandas
import numpy as np
s = pd.Series(np.datetime64(3, 's'))
print(s.to_numpy(dtype='int64').dtype)
sp = pandas.Series(np.datetime64(3, 's'))
print(sp.to_numpy(dtype='int64').dtype)
```
### Issue Description
modin result is a datetime64[ns] array, while pandas result is an int64 array.
### Expected Behavior
Should match dtype in `dtype` argument, as pandas does.
### Error Logs
N/A
### Installed Versions
<details>
UserWarning: Setuptools is replacing distutils.
INSTALLED VERSIONS
------------------
commit : e12b21703494dcbb8f7aa951b40ebe1d033307ba
python : 3.9.18.final.0
python-bits : 64
OS : Darwin
OS-release : 23.0.0
Version : Darwin Kernel Version 23.0.0: Fri Sep 15 14:43:05 PDT 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T6020
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.25.0
ray : 2.8.0
dask : None
distributed : None
hdk : None
pandas dependencies
-------------------
pandas : 2.1.2
numpy : 1.26.1
pytz : 2023.3.post1
dateutil : 2.8.2
setuptools : 68.0.0
pip : 23.3
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 8.17.2
pandas_datareader : None
bs4 : None
bottleneck : None
dataframe-api-compat: None
fastparquet : None
fsspec : 2023.10.0
gcsfs : None
matplotlib : 3.8.1
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 14.0.1
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2023.3
qtpy : None
pyqt5 : None
</details>
| 0easy
|
Title: Deprecate starlite
Body: Now that we have Litestar support, I think we can deprecate Starlite 😊
```[tasklist]
### Tasks
- [ ] Add notice in the docs
- [ ] Use typing_extensions.deprecated to mark class as deprecated (see https://peps.python.org/pep-0702/)
- [ ] Trigger deprecation warning (with test), this might
``` | 0easy
|
Title: [BUG] Extended PageContent documentation wrong
Body: ## Description
I am attempting to extend the `PageContent` model following the example provided in the official django CMS documentation: [Extending PageContent Model](https://docs.django-cms.org/en/latest/how_to/18-extending_page_contents.html#pagecontent-model-extension-example). While everything works as expected, I am facing an issue with retrieving the page content extension in a template.
## Steps to reproduce
Steps to reproduce the behavior:
1. Follow the steps to extend the `PageContent` model as per the documentation linked above.
2. Add custom fields to the `PageContent` extension model.
3. Try to render the extended content in a template using the provided example.
4. Notice that the extended content is not being retrieved or rendered as expected in the template.
## Expected behaviour
The extended content fields added to the `PageContent` model should be accessible in the template, allowing for proper rendering of the extended fields' data.
## Actual behaviour
While the `PageContent` model extension works correctly in the admin interface, the extended content is not being retrieved in the template. The extended fields do not display or render as expected.
## Screenshots
_No screenshots available for this issue._
## Additional information (CMS/Python/Django versions)
- Django CMS version: 4.1.2
- Python version: 3.11
- Django version: 4.2.8
- Environment: local development
- Logs: No relevant logs were generated during the issue.
## Do you want to help fix this issue?
* [ ] Yes, I want to help fix this issue and I will join the channel #pr-reviews on [the Discord Server](https://discord-pr-review-channel.django-cms.org) to confirm with the community that a PR is welcome.
* [x] No, I only want to report the issue.
| 0easy
|
Title: When view_closed event is unhandled, bolt logs out incorrect suggestion on how to handle the event.
Body: If you open a view in a bolt-py app, and set `notify_on_close=True` on this view, and have a view_submission event handler but _not_ a view_closed handler, Bolt will log out a warning and suggestion:
```
WARNING:slack_bolt.App:Unhandled request ({'type': 'view_closed', 'view': {'type': 'modal', 'callback_id': 'message_actions_modal'}})
---
[Suggestion] You can handle this type of event with the following listener function:
@app.view("message_actions_modal")
def handle_view_events(ack, body, logger):
    ack()
    logger.info(body)
```
The suggestion is incorrect for handling the view_closed event: the decorator should be `@app.view_closed` instead of `@app.view`.
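A corrected suggestion would look like this (callback_id taken from the warning above):
```python
@app.view_closed("message_actions_modal")
def handle_view_closed_events(ack, body, logger):
    ack()
    logger.info(body)
```
| 0easy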
|
Title: Export ONNX
Body: Thank you for this excellent work.
I don't want to be a killjoy, but an export option would be nice.
``` igel export ```
Can we integrate this library into the project:
http://onnx.ai/sklearn-onnx/index.html
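For a sense of scope, the core of such an export might be as small as this (a sketch using sklearn-onnx's documented `to_onnx`; the wiring into igel is an assumption):
```python
from skl2onnx import to_onnx

def export_model(model, X_sample, path="model.onnx"):
    # Infer input types/shapes from a single training sample.
    onx = to_onnx(model, X_sample[:1])
    with open(path, "wb") as f:
        f.write(onx.SerializeToString())
```
| 0easy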
|
Title: Add automated integration, unit tests, code coverage
Body: | 0easy
|
Title: Marketplace - creator page - change creator name font
Body: ### Describe your issue.
Change font to Poppins.
Font name: Poppins
font size: 35
line-height: 40
Follow the h2 style in this typography guide: https://www.figma.com/design/Ll8EOTAVIlNlbfOCqa1fG9/Agent-Store-V2?node-id=2759-9596&t=95EuovUwxruyYA6E-1
<img width="582" alt="Screenshot 2024-12-16 at 16 59 28" src="https://github.com/user-attachments/assets/0121f6b3-0dcf-4373-a5a2-675b82877713" />
| 0easy
|
Title: [FEATURE] Run Python 3.9 builds
Body: - Add `39` to `envlist` in `tox.ini`
- Add `39` to `depends` of the `coverage-report` job in `tox.ini`
- Add `Programming Language :: Python :: 3.9` to the classifiers list in `setup.py`
- Add Python 3.9 job in `.github/workflows/build.yml` similarly to other builds | 0easy
|
Title: Implement GitLab 'List projects a user has contributed to'
Body: ## Description of the problem, including code/CLI snippet
This endpoint seems to be not implemented yet - at least, it is not documented:
https://docs.gitlab.com/ee/api/projects.html#list-projects-a-user-has-contributed-to
## Expected Behavior
I would love to see this endpoint implemented, as it makes filtering for contributed projects much easier.
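Usage could then look something like this (hypothetical; the manager name is a guess following python-gitlab's conventions):
```python
import gitlab

gl = gitlab.Gitlab("https://gitlab.example.com", private_token="...")
user = gl.users.get(1234)

# Hypothetical manager mirroring GET /users/:user_id/contributed_projects
projects = user.contributed_projects.list(iterator=True)
```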
## Specifications
- python-gitlab version: v5.6.0
- Gitlab server version (or gitlab.com): v17.9
| 0easy
|
Title: Improvements to K-Elbow Visualizer
Body: The following tasks from #91 are left over to finalize the implementation of the `KElbowVisualizer ` and improve it.
*Note to contributors: items in the below checklist don't need to be completed in a single PR; if you see one that catches your eye, feel to pick it off the list!*
- [ ] Add inertia metric as seen in [sklearn KMeans Equivalent of Elbow Method](https://stackoverflow.com/questions/41540751/sklearn-kmeans-equivalent-of-elbow-method)
- [x] Improve documentation with a description of how to use K-Elbow to select `k`
Stretch goals:
- [x] Find example dataset with clear elbow curve demonstration to use in documentation
- [ ] Parallelize with joblib and `n_jobs` argument (Note: this one is expert-level! See discussion [here](https://github.com/DistrictDataLabs/yellowbrick/pull/822#issuecomment-488258026) for more details)
The stretch goals can be converted to their own issues if not completed in this feature request. | 0easy
|
Title: Investigate Usage of sklearn VectorizerMixin with DispersionPlot Visualizer
Body: **Describe the solution you'd like**
It was proposed that we investigate the sklearn.feature_extraction.text. VectorizerMixin. This will make this DispersionPlot visualizer more like a Scikit-Learn text transformer, and will allow us to submit input as:
- list of words
- raw string that is tokenized using sklearn
- list of strings representing documents on disk
More information on the Mixin can be found here:
https://github.com/scikit-learn/scikit-learn/blob/a24c8b46/sklearn/feature_extraction/text.py#L98
| 0easy
|
Title: Support Path type for repo argument
Body: ## 🚀 Feature
Add support for Path type to the `repo` attribute of the `Run` class.
### Motivation
Currently, the `Run.repo` attribute supports only the str type. The Path type from pathlib is very convenient for dealing with paths, and it would be great to use it in a code to specify the repo of Aim run as well. Passing a Path type to `Run.repo` will keep the user's script clean and precise.
### Pitch
A user should be able to pass a Path type to `Run.repo` argument.
Say we have the following simple script:
```
from pathlib import Path
from aim import Run
log_dir = Path("path/to/my/log/dir")
```
The current way of passing the repo path is:
```
run = Run(repo=str(log_dir))
```
Having such an API would make the code cleaner:
```
run = Run(repo=log_dir)
```
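A minimal sketch of one possible fix on the library side (assuming the coercion happens wherever `Run` receives `repo`):
```python
import os
from typing import Optional, Union

def normalize_repo(repo: Optional[Union[str, os.PathLike]]) -> Optional[str]:
    # os.fspath handles both str and pathlib.Path (anything PathLike).
    return os.fspath(repo) if repo is not None else None
```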
| 0easy
|
Title: Add auto ML support
Body:
### Description
We want to add support for AutoML. My suggestion is to use autokeras. I'm leaving this open for newcomers who want to contribute to this project.
The parameters of the model need to be read from a yaml file (check utils.py in igel; there is a helper function to read a yaml or json file). These parameters will be used to construct and train a model. The results should then be saved under the model_results folder.
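A rough sketch of what the implementation could look like (the yaml keys and file name here are hypothetical; the real ones should follow igel's schema):
```python
import autokeras as ak
import pandas as pd
import yaml

with open("igel.yaml") as f:  # hypothetical file name
    params = yaml.safe_load(f)

df = pd.read_csv(params["input_path"])   # hypothetical key
x = df.drop(columns=[params["target"]])  # hypothetical key
y = df[params["target"]]

# AutoKeras searches over architectures up to max_trials.
clf = ak.StructuredDataClassifier(max_trials=params.get("max_trials", 10))
clf.fit(x, y, epochs=params.get("epochs", 5))
clf.export_model().save("model_results/autokeras_model")
```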
Check the README and repo to know how igel works in order to keep the interface as clean as possible.
If you have any questions, check out the discussion for this issue.
| 0easy
|
Title: Upload coverage report from GPU tests and model tests
Body: Apparently this should just work, i.e. codecov will automatically handle merging the different reports. https://docs.codecov.io/docs/merging-reports | 0easy
|
Title: WebClient's paginated response iterator does not work for admin.conversations.search API
Body: Usually, `next_cursor` in an API response exists under `response_metadata` property. However, only admin.conversations.search API method's response has it at the top level: https://github.com/slackapi/java-slack-sdk/blob/main/slack-api-client/src/main/java/com/slack/api/methods/response/admin/conversations/AdminConversationsSearchResponse.java#L19
Due to this inconsistency, this part of this Python SDK does not work for that API: https://github.com/slackapi/python-slack-sdk/blob/v3.7.0/slack_sdk/web/slack_response.py#L141-L145
We can improve the internals of the class to support this API too.
* TODOs
* [ ] Add unit tests to reproduce the pattern
* [ ] Change the logic (see the sketch below)
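One way the cursor check could support both response shapes (a sketch, not necessarily the final implementation):
```python
def _next_cursor_is_present(data: dict) -> bool:
    # Most methods nest the cursor under response_metadata;
    # admin.conversations.search returns it at the top level.
    cursor = (
        data.get("response_metadata", {}).get("next_cursor")
        or data.get("next_cursor")
    )
    return cursor is not None and cursor != ""
```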
### Category (place an `x` in each of the `[ ]`)
- [x] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [ ] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| 0easy
|
Title: Add readme links
Body: Add links to the relevant sections: Jupyter, VSCode, and the soopervisor integrations. | 0easy
|
Title: Add support for sublabel attributes as frame filter criteria in annotation view
Body: ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
1. Create a task with a skeleton label having several points, define an attribute for some points
2. Open a job
3. Try to create a frame filter with skeleton point attributes in the condition

<details>
Label definitions from the task
```json
[
{
"name": "test",
"id": 79,
"color": "#fb117d",
"type": "skeleton",
"sublabels": [
{
"name": "1",
"attributes": [
{
"name": "1.a1",
"mutable": false,
"input_type": "select",
"default_value": "a",
"values": [
"a",
"b"
],
"id": 65
}
],
"type": "points",
"color": "#d12345",
"id": 80
},
{
"name": "2",
"attributes": [
{
"name": "2.a1",
"mutable": true,
"input_type": "checkbox",
"default_value": "false",
"values": [
"false"
],
"id": 66
}
],
"type": "points",
"color": "#350dea",
"id": 81
}
],
"svg": "<line x1="35.7943229675293" y1="25.3843994140625" x2="45.1568717956543" y2="35.145355224609375" data-type="edge" data-node-from="1" data-node-to="2"></line>\n<circle r="0.75" cx="35.7943229675293" cy="25.3843994140625" data-type="element node" data-element-id="1" data-node-id="1" data-label-id="80"></circle>\n<circle r="0.75" cx="45.1568717956543" cy="35.145355224609375" data-type="element node" data-element-id="2" data-node-id="2" data-label-id="81"></circle>",
"attributes": []
}
]
```
</details>
### Describe the solution you'd like
There is already a filter for root label attributes. Sub-labels should also have an option to be used as a filter criteria.
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: Trio support
Body: I admittedly don't know how much may be involved here, but my initial thought is that supporting ASGI applications that use Trio wouldn't require too much to implement as an adapter here. | 0easy
|
Title: Send warning email when someone tries to login to your account
Body: Sending a warning email when someone tries to login to your account is a common feature of authentication systems. It would be great if we implement it here. | 0easy
|
Title: RoBERTa on SuperGLUE's 'Recognizing Textual Entailment' task
Body: RTE is one of the tasks of the [SuperGLUE](https://super.gluebenchmark.com) benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the RTE data and fine-tunes a model on it. We expect scores in the range of their entry on the [SuperGLUE leaderboard](https://super.gluebenchmark.com/leaderboard).
We recommend you use the [AllenNLP Repository Template](https://github.com/allenai/allennlp-template-config-files) as a starting point. It might also be helpful to look at how the TransformerQA model works ([training config](https://github.com/allenai/allennlp-models/blob/main/training_config/rc/transformer_qa.jsonnet), [reader](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/rc/dataset_readers/transformer_squad.py), [model](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/rc/models/transformer_qa.py)). | 0easy
|
Title: Add strict mypy type annotations to the codebase
Body: We want all existing functions to have strict mypy type annotations.
Contributors: we could really use your help with this effort! Adding type annotations (that pass mypy with [--strict](https://mypy.readthedocs.io/en/stable/command_line.html#cmdoption-mypy-strict) flag) to every function in the codebase is a lot of work, but it will make our code much more robust and help avoid future bugs.
Just start with any one file and help us add type annotations (that pass mypy with [--strict](https://mypy.readthedocs.io/en/stable/command_line.html#cmdoption-mypy-strict) flag) for the functions in this file. You can make a PR that only covers one file which shouldn't take that long.
To get started, just run:
```
mypy --strict --install-types --non-interactive cleanlab
```
and see which files it complains about. You can add strict type annotations for any one these files. After you have saved some changes, you can run the above command again to see if mypy is now happy with the strict type annotations you have added.
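As a concrete illustration, here is a hypothetical helper annotated so it passes strict mode (the function itself is made up, only the annotation style matters):
```python
import numpy as np
import numpy.typing as npt

def low_confidence_mask(
    pred_probs: npt.NDArray[np.float64], threshold: float = 0.5
) -> npt.NDArray[np.bool_]:
    """Flag examples whose top predicted probability is below ``threshold``."""
    mask: npt.NDArray[np.bool_] = pred_probs.max(axis=1) < threshold
    return mask
```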
Also check out:
https://github.com/cleanlab/cleanlab/issues/307
https://github.com/cleanlab/cleanlab/issues/351
https://github.com/cleanlab/cleanlab/issues/407
For help with getting numpy type annotations correct, look at this PR: https://github.com/cleanlab/cleanlab/pull/585
Another example PR that added good type annotations for one file: https://github.com/cleanlab/cleanlab/pull/317
You can similarly copy these patterns for other files. The rule of thumb here is that changes in the PR should have fewer `mypy --strict` errors on a per-file basis, without the total number of errors going up for the entire package. | 0easy
|
Title: [TVM Integration] Improve TVM Integration: Tutorial + Investigating Numerical Issue
Body: ## Description
TVM support has been added in https://github.com/dmlc/gluon-nlp/pull/1390. Also, we updated our benchmarking utility to support profiling the inference speed of TVM: https://github.com/dmlc/gluon-nlp/tree/master/scripts/benchmarks. We can further improve our documentation.
- [ ] Attach the benchmark numbers. (Good for beginners)
- [ ] Add a tutorial about how to convert GluonNLP models to TVM and do inference. (Good for beginners; a rough starting point is sketched after this list)
- [ ] Investigate the numerical issues triggered when converting ALBERT to TVM. Currently, the test may still sometimes fail even if `atol=1E-1`, `rtol=1E-3`.
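For the tutorial item, a very rough outline of the conversion path (the placeholder network stands in for a real GluonNLP model, the input name/shape are assumptions, and runtime module names vary across TVM versions):
```python
import mxnet as mx
import tvm
from tvm import relay

# Placeholder network standing in for a GluonNLP model; hybridize before export.
model = mx.gluon.nn.HybridSequential()
model.add(mx.gluon.nn.Dense(8))
model.initialize()
model.hybridize()

shape_dict = {"data": (1, 128)}  # hypothetical input name and shape
mod, params = relay.frontend.from_mxnet(model, shape=shape_dict)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```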
@dmlc/gluon-nlp-committers
| 0easy
|
Title: [FR] Delivery Address on Purchase Orders
Body: ### Please verify that this feature request has NOT been suggested before.
- [x] I checked and didn't find a similar feature request
### Problem statement
It would be great to add a 'Delivery Address' field to purchase orders.
This field would allow you to select any company's address, which could then be used to generate a delivery address on the purchase order report.
As discussed here https://github.com/inventree/InvenTree/discussions/9219
### Suggested solution
The existing address field on POs should be renamed to something like 'Supplier's Address', and the new field should be something like 'Delivery Address'.
### Describe alternatives you've considered
I'm currently getting around this by adding a 'shipping address' to the supplier's address list. I then select this 'shipping address' in the purchase order 'address' field dropdown.
### Examples of other systems
_No response_
### Do you want to develop this?
- [ ] I want to develop this. | 0easy
|
Title: `uvicorn.workers` is deprecated
Body: According to https://github.com/encode/uvicorn/pull/2302 and https://github.com/cookiecutter/cookiecutter-django/pull/5101 `uvicorn.workers` is deprecated.
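Per the `uvicorn-worker` project, the update is essentially a one-line swap (sketch):
```python
# Before (deprecated):
from uvicorn.workers import UvicornWorker

# After, with the `uvicorn-worker` package installed:
from uvicorn_worker import UvicornWorker

# Gunicorn equivalent: gunicorn -k uvicorn_worker.UvicornWorker app:app
```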
It should be rather an easy task to update to `uvicorn-worker`. | 0easy
|
Title: Add static type annotations
Body: I am having trouble using Dask in a project that uses Pyright, because Dask does not use static type annotations.
Take, `da.from_array`, for example:
https://github.com/dask/dask/blob/b2ec1e1a2361db1e269ba86c7c59d3f6df70a02d/dask/array/core.py#L3307-L3317
Or `array.rechunk`:
https://github.com/dask/dask/blob/b2ec1e1a2361db1e269ba86c7c59d3f6df70a02d/dask/array/rechunk.py#L268-L275
PyRight complains if you use anything but a `str` for `chunks`:

```py
Argument of type "tuple[Literal[1], Literal[-1], Literal[-1]]" cannot be assigned to parameter "chunks" of type "str" in function "from_array"
"tuple[Literal[1], Literal[-1], Literal[-1]]" is incompatible with "str"
```
This happens because in `from_array` and `rechunk`, the default `chunks="auto"` causes `chunks` to be inferred as a `str` type.
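A sketch of the kind of union that would fix this (the alias name is hypothetical, and the real union must cover every form `from_array` accepts):
```python
from typing import Literal, Sequence, Union

ChunkSpec = Union[
    int,
    str,  # includes "auto" and byte-size strings like "100 MiB"
    Sequence[Union[int, Literal["auto"]]],
]

def from_array(x: object, chunks: ChunkSpec = "auto") -> object:
    ...
```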
Is there an awareness of this problem? Is there a known solution? If not, is there a plan to fix this? | 0easy
|
Title: Replace Weasyprint with PDFkit
Body: https://github.com/JazzCore/python-pdfkit
It renders better.
We need to create an option `?clean=true` that renders pure content, without the JS and CSS transformations, so it fits in PDF pages. We also need a URL, `url_for('export_to_pdf', 'article-long-slug')`, that calls a localhost URL and returns the HTML file to the browser.
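Rendering with PDFkit would then be roughly the following (the route and host are assumptions; note that PDFkit shells out to wkhtmltopdf under the hood):
```python
import pdfkit

# Fetch the cleaned article from the local server and render it to PDF.
pdfkit.from_url("http://localhost:5000/article-long-slug?clean=true", "article.pdf")
```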
| 0easy
|
Title: RoBERTa on SuperGLUE's CommitmentBank task
Body: CommitmentBank is one of the tasks of the [SuperGLUE](https://super.gluebenchmark.com) benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the CommitmentBank data and fine-tunes a model on it. We expect scores in the range of their entry on the [SuperGLUE leaderboard](https://super.gluebenchmark.com/leaderboard).
This can be formulated as a classification task, using the [`TransformerClassificationTT`](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/models/transformer_classification_tt.py) model, analogous to the IMDB model. You can start with the [experiment config](https://github.com/allenai/allennlp-models/blob/Imdb/training_config/tango/imdb.jsonnet) and [dataset reading step](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/tango/imdb.py#L13) from IMDB, and adapt them to your needs. | 0easy
|
Title: Marks in Range Slider has height equal to 0
Body: And, therefore, are hidden.
I have a range slider followed by an array of plotly graphs. I've set some marks for the slider but they aren't visible. Apparently, they are inside a div with class "rc-slider-mark", and that div has 0 height and is absolutely positioned, so `overflow: overlay` doesn't help either. This is what I see:

I worked around this by setting a fixed height on a div that wraps the RangeSlider component. The height had to be big enough to contain the slider plus the marks that were not being shown; this is what I get:

However, messing around with fixed heights and absolute positions doesn't look right to me. I think there should be a better CSS implementation that reserves the height needed for the marks while displaying them exactly where they have to be.
The whole live example is available [here](http://wikichron.science/). | 0easy
|
Title: How can a Markdown (md) file be opened as an ipynb notebook?
Body: | 0easy
|
Title: Allow to add title
Body: It would be nice to be able to add a plot title on top of all the annotations. Something like:
`statannotations.Annotator.Annotator.configure(title='foobar',...)`
I guess the function would somehow have to take into account whether the annotations are to the left and the right (in this case the top is free and the title can simply be added using `ax.set_title('foobar')`). If the annotations are on top, the function would have to check where the upper end of the annotations lies and put the title there. | 0easy
|
Title: Add `__slots__` definitions
Body: ## Problem
We currently don't define the `__slots__` attribute for any of our classes. Adding it is an essentially free performance improvement: instances use less memory and attribute access gets faster.
## Suggested solution
We should add `__slots__` definitions to as many classes as possible. https://github.com/ariebovenberg/slotscheck/ should then be included in tests to ensure we are defining the slots correctly.
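A small sketch of the pattern (the class and fields here are illustrative, not the actual definitions):
```python
class Response:
    __slots__ = ("status", "headers", "_raw")  # no per-instance __dict__

    def __init__(self, status: int, headers: dict, raw: bytes) -> None:
        self.status = status
        self.headers = headers
        self._raw = raw
```
slotscheck can then be run in CI (e.g. `python -m slotscheck prisma`) to catch missing or broken slots.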
Non-exhaustive list:
- `client.Prisma`
- `actions.*Actions`
- `http_abstract.HTTP`
- `http_abstract.Response`
I am unsure if we can/should add slots to model classes.
| 0easy
|
Title: Mongo Support
Body: Probably use ODMantic: https://github.com/art049/odmantic
How would you test the implementation? Mock a Mongo DB? | 0easy
|
Title: Deprecate `testtools` support
Body: At the time of writing, our documentation might [give the impression](https://github.com/falconry/falcon/discussions/2155) that using `testtools` is the preferred way to write tests for a Falcon app.
I haven't done any research or survey in the area, but just subjectively it feels that the most popular testing frameworks in 2023 are `pytest` followed by the stdlib's `unittest` (both are also well supported by Falcon).
Moreover, IIRC I had to exclude `testtools` from our test dependencies when adding the Python 3.10 CI gates, since it wasn't 3.10-ready at the time. So it's becoming a maintenance burden, too.
OTOH, it seems that there has been a new [`testtools` release](https://pypi.org/project/testtools/), and the project is still maintained. So after all maybe it doesn't hurt to keep the integration?
Even if we decide to keep `testtools`, we should IMHO reword the docs not to give the impression it is the preferred way, and link to our tutorial for using `pytest` in the same paragraph. | 0easy
|
Title: Contributor Location metric API
Body: The canonical definition is here: https://chaoss.community/?p=3468 | 0easy
|
Title: [Feature] New models Gemma 3
Body: ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 2. Please use English, otherwise it will be closed.
### Motivation
Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning.
Inputs and outputs:
- Input:
  - Text string, such as a question, a prompt, or a document to be summarized
  - Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
  - Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size
- Output:
  - Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
  - Total output context of 8192 tokens
### Related resources
https://huggingface.co/collections/google/gemma-3-release-67c6c6f89c4f76621268bb6d
https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf | 0easy
|
Title: [Feature] update sgl-kernel 3rdparty flashinfer to latest main
Body: ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 2. Please use English, otherwise it will be closed.
### Motivation
Update sgl-kernel's 3rdparty flashinfer to the latest main to fix the compile issue.
### Related resources
_No response_ | 0easy
|
Title: Flag needed to disable file output
Body: ### Discussed in https://github.com/sherlock-project/sherlock/discussions/2247
___
<div type='discussions-op-text'>
<sup>Originally posted by **ovalofficer** August 7, 2024</sup>
Hi, I'm making a web wrapper for Sherlock but I want to disable the file output. Is there a switch I can use to turn it off without editing Sherlock's source?
I couldn't find out how to do it. Sorry if it's something obvious</div>
___
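For reference, a minimal argparse sketch of such a flag (the flag name is hypothetical):
```python
import argparse

parser = argparse.ArgumentParser(prog="sherlock")
parser.add_argument(
    "--no-txt",  # hypothetical flag name
    action="store_true",
    default=False,
    help="Disable writing results to a text file.",
)

args = parser.parse_args(["--no-txt"])
print(args.no_txt)  # True
```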
This flag is to be a temporary feature, becoming the default in 0.16.0. | 0easy
|
Title: improve and extend the docs & Readme
Body: ### Description
This should be a good first issue for newcomers who want to join the project. Improve the README and docs if you see any opportunity for improvement.
Extend igel's docs by providing a full description of the key-value pairs that can be set in the yaml file.
For example:
- scaling values: the user can choose normalization or standardization. Both methods should be explained clearly.
Furthermore, it would be nice to have a copy-to-clipboard button for the terminal commands in the docs. This would be awesome.
| 0easy
|
Title: Decay Function Showing Incorrect Results Where Length Greater Than 1
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
3.99b0
**Do you have _TA Lib_ also installed in your environment?**
Yes
**Have you tried the _development_ version? Did it resolve the issue?**
Tested in development version.
**Describe the bug**
According to the documentation provided at https://tulipindicators.org/decay, if the length is 4 and the signal is:
[0, 0, 0, 1, 0, 0, 0, 1]
the output of the decay indicator should be:
[0, 0, 0, 1, 0.75, 0.5, 0.25, 1]
instead the output is:
[0, 0, 0, 1, .75, 0, 0, 1]
**Expected behavior**
Decay should work for lengths greater than 1 as described in documentation.
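For reference, a direct implementation of the documented recurrence `d[i] = max(s[i], d[i-1] - 1/length, 0)` reproduces the expected output:
```python
import numpy as np

def linear_decay(signal, length):
    out = np.zeros(len(signal))
    prev = 0.0
    for i, s in enumerate(signal):
        # Each step decays the previous value by 1/length, floored at 0,
        # and a new signal resets the decay.
        prev = max(float(s), prev - 1.0 / length, 0.0)
        out[i] = prev
    return out

print(linear_decay([0, 0, 0, 1, 0, 0, 0, 1], 4))
# [0.   0.   0.   1.   0.75 0.5  0.25 1.  ]
```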
Thanks for using Pandas TA!
| 0easy
|
Title: Allow serialization of a subset of params
Body: We store `params` for each task execution so we can re-run the task if any of the parameters change. But since the metadata is a JSON file, we can only store JSON-serializable parameters. The current implementation is all or nothing: if any parameter is not JSON-serializable, nothing is saved. It would be better to ignore only the parameters we cannot serialize and save the rest.
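A sketch of the selective approach:
```python
import json

def json_serializable_params(params: dict) -> dict:
    """Keep only the params that can be stored in the JSON metadata."""
    out = {}
    for key, value in params.items():
        try:
            json.dumps(value)
        except (TypeError, ValueError):
            continue  # skip non-serializable params instead of dropping everything
        out[key] = value
    return out
```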
https://github.com/ploomber/ploomber/blob/aca1a8a9ee48a4b8f671f1b6084fb9647caa45e0/src/ploomber/products/metadata.py#L252 | 0easy
|
Title: [FR] Add version of `MixtureSameFamily` from Pyro
Body: I'm finally following up on @fehiepsi 's suggestion that I make this feature request following our conversation in this [issue](https://github.com/arviz-devs/arviz/issues/1661) in the Arviz repo. __But looking over the docs again, it looks like this may have already been implemented with a Tensorflow distribution that may make this FR unnecessary.__ Please feel free to close if that's the case!
In case it's useful anyway, here's what I had written before noticing that in the Tensorflow distributions api:
It would be very useful to me, and I imagine anyone else working with finite mixture models, to have an implementation of a distribution like Pyro's [MixtureSameFamily](https://docs.pyro.ai/en/dev/distributions.html?highlight=MixtureSameFamily#pyro.distributions.MixtureSameFamily). Numpyro's ability to enumerate over and otherwise manage discrete variables is awesome, but for many finite mixture models it can also be useful to explicitly marginalize out discrete latent variables (e.g. see this example in the [Stan manual](https://mc-stan.org/docs/2_24/stan-users-guide/summing-out-the-responsibility-parameter.html)). This is required for traditional implementations of HMC, but it seems like it would be useful even with Numpyro's additional capabilities, both for speed and for cases where the number of latent mixtures is large.
| 0easy
|
Title: cell_id param missing from `run_cell` methods
Body: The `run_cell()` method is missing a docstring for the `cell_id` param.
I was trying to figure out if this param would help me control the output number in the cell display, but there is no documentation on it, so I have no idea what it does. | 0easy
|
Title: X-Client-Info header
Body: we're rolling out a header similar to Stripe's App-Data in all of the client libs so issues can be debugged more easily between the client and the backend
the javascript libs are done already: https://github.com/supabase/supabase-js/pull/238
the format is: `X-Client-Info: supabase-js/1.11.0`
for client libs that wrap others (like how supabase-js wraps gotrue-js) we allow the wrapper lib to **overwrite** the wrapped lib's header (given that we can infer the gotrue-js version based on which supabase-js was used to make the call)
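A sketch of what that could look like in this repo (the name and version are placeholders; the real version should come from the package metadata):
```python
CLIENT_NAME = "supabase-py"  # placeholder for this library's name
CLIENT_VERSION = "0.0.3"     # placeholder; read from package metadata in practice

def default_headers() -> dict:
    # Merged into every request; a wrapping lib may overwrite this value.
    return {"X-Client-Info": f"{CLIENT_NAME}/{CLIENT_VERSION}"}
```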
any help with rolling this out here would be incredible | 0easy
|
Title: Suggestion! Maybe you can list the basic hardware requirements of this project.
Body: Just as the title. | 0easy
|
Title: New dataframe transform: Unique
Body: ### Description
I would like to be able to get the unique values of a column
### Suggested solution
Using the distinct operator in sql or the unique operators in both polars and pandas should do the trick
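For reference, the underlying operations in each backend (a quick sketch):
```python
import pandas as pd
import polars as pl

data = {"col": ["a", "b", "a", "c"]}

print(pd.DataFrame(data)["col"].unique())  # pandas: ['a' 'b' 'c']
print(pl.DataFrame(data)["col"].unique())  # polars: distinct values (order not guaranteed)
# SQL equivalent: SELECT DISTINCT col FROM table;
```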
### Alternative
_No response_
### Additional context
_No response_ | 0easy
|
Title: Pass file path or function directly to `st.navigation`, without using `st.Page`
Body: ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
When I write small test apps with multiple pages, I often find myself wanting to do something like this:
```python
st.navigation(["page1.py", "page2.py"])
# Or with functions:
st.navigation([page1_func, page2_func])
```
While this is totally plausible and I don't care about customizing the title or icon for these pages, Streamlit always forces me to wrap things in `st.Page`:
```python
st.navigation([st.Page("page1.py"), st.Page("page2.py")])
# Or with functions:
st.navigation([st.Page(page1_func), st.Page(page2_func)])
```
It would be great if we'd support the former!
### Why?
Not a huge pain point but I regularly stumble over it. Especially because we show a pretty generic exception in this case and I'm confused why it doesn't just work.
### How?
_No response_
### Additional Context
_No response_ | 0easy
|
Title: Automate stable release workflow
Body: **Description of the issue**
The release process for stable Cirq versions is currently a manual process documented in https://github.com/quantumlib/Cirq/blob/67090465ba60a1ba1fdb8926defa3a465d52cc22/release.md. This documentation is out of date because of changes to the underlying tools.
The release process doesn't have to be manual. We already automate the prerelease process, which packages and uploads Cirq to PyPI after each PR merge: https://github.com/quantumlib/Cirq/blob/main/.github/workflows/release-main.yml.
A similar workflow should be created that gets triggered by a GitHub release, so that creating a stable release is just a two-step operation: 1. update the Cirq version, 2. create a GitHub release (which triggers the release workflow). This is what we do for other quantumlib libraries like [QSim](https://github.com/quantumlib/qsim): https://github.com/quantumlib/qsim/blob/master/.github/workflows/release_wheels.yml | 0easy
|
Title: [BUG] Instructions in prompt bleed into personality description
Body: When asking the bot to tell something about itself, I got the following response:
> Sure thing! I'm a software engineer, and I love talking about random topics and exploring niche interests. I'm also a helpful and descriptive person, and I always try to make well-informed decisions. I'm also mindful of conversation history and try to be consistent with my answers. Oh, and I'm always up for chatting about coding, too – I'm good at providing code examples and explanations! 🤓
Half of the mentioned attributes are instructions to the bot, and not really descriptions of its personality. Perhaps the prompt should instruct GPT to act as a character with certain characteristics instead of simply defining all those characteristics with "you are ...", which is the same format used to define its behaviour. I will play a bit with the prompt tomorrow to come up with a suggestion. | 0easy
|
Title: Clarify intended behavior of model.backtest and model.historical_forecast with multiple series and allow training on multiple series per fold when backtesting
Body: **Is your feature request related to a current problem? Please describe.**
It is unclear from the documentation that `model.historical_forecasts` and `model.backtest` will only ever train on a single series at a time if multiple series are provided, in contrast to `model.fit` which will train on every series provided.
**Describe proposed solution**
Add a Boolean keyword argument to `model.backtest` and `model.historical_forecast` to train on all "past" samples in each series for each fold of backtesting.
**Describe potential alternatives**
Clarify the documentation and possibly provide an example of how to achieve the proposed functionality. | 0easy
|