text | labels
---|---|
Title: Update pytorch lightning import to work with Ray and other libraries.
Body: **Is your feature request related to a current problem? Please describe.**
Importing and using `lightning.pytorch` and `pytorch_lightning` in the same program causes conflicts. Ray has migrated its tuning to `ray.tune.Tuner` and is deprecating `ray.tune.run`. Ray uses a conditional import: it imports `lightning.pytorch` first and falls back to `pytorch_lightning` only if the former is not available, whereas the darts `pl_forecasting_module` imports `pytorch_lightning` directly. This may also apply to other places.
**Describe proposed solution**
Darts should incorporate a conditional import that matches this priority in `pl_forecasting_module`, `lr_finder`, etc., as follows:
```python
#import pytorch_lightning as pl

def import_lightning():  # noqa: F402
    try:
        import lightning.pytorch as pl
    except ModuleNotFoundError:
        import pytorch_lightning as pl
    return pl

pl = import_lightning()
```
Note that files may import more than just `pl` - those other imports could also be wrapped under the conditional.
**Describe potential alternatives**
Change imports of pytorch_lightning to lightning.pytorch
**Additional context**
I noticed this issue when attempting to use `ray.tune.Tuner` and `ray.train.torch.TorchTrainer` in conjunction with `RNNModel`. I solved it by making the change in the proposed solution.
| 0easy
|
Title: Improve error message for unsupported architectures
Body: ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
If the user attempts to use the CLI on an unsupported architecture then an obscure error message is shown:
```
$ prisma db push
...
OSError: [Errno 8] Exec format error: '/tmp/prisma/binaries/engines/1c9fdaa9e2319b814822d6dbfd0a69e1fcc13a85/prisma-cli-linux'
```
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
If this error occurs we should point the user to documentation describing how to build the CLI and the engine binaries themselves.
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
Original error reported in #195
## Implementation
Modify `src/prisma/cli/prisma.py` to capture exec format errors and report that the current platform / architecture is unsupported, e.g. as sketched below.
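A minimal sketch of such a guard (hypothetical helper; the real module layout and error type may differ):
```python
import errno
import subprocess

def run_binary(args: list) -> None:
    try:
        subprocess.run(args, check=True)
    except OSError as exc:
        if exc.errno == errno.ENOEXEC:  # [Errno 8] Exec format error
            raise RuntimeError(
                'The Prisma CLI binary does not support your platform / '
                'architecture. See the docs for building the CLI and '
                'engine binaries yourself.'
            ) from exc
        raise
```
| 0easy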
|
Title: [ENH] Smoothing filters as BaseSeriesTransformers
Body: ### Describe the feature or idea you want to propose
A while back (five years?) we compared different smoothing algorithms for time series classification.
https://link.springer.com/chapter/10.1007/978-3-030-29859-3_5
Now, we found it did not make much difference on the UCR data, but that does not mean it is not of use. It would be good to implement some or all of these as BaseSeriesTransformers, most are pretty simple.
### Describe your proposed solution
We looked at the following, but there are probably many more. These are all parameterised, so it would be good to have an automatic way of setting their parameters (a minimal sketch of the simplest one is shown after this list).
- [x] Simple Moving Average (good first issue)
- [x] Exponential smoothing (good first issue)
- [x] Gaussian filter (good first issue)
- [x] Savitzky-Golay filter
- [x] Discrete Fourier Approximation #1967 @Cyril-Meyer
- [x] Recursive Median Sieve
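For illustration, the core of the simplest of these as a plain NumPy sketch (wrapping it in the `BaseSeriesTransformer` interface is omitted, since the exact base-class hooks are an assumption here):
```python
import numpy as np

def simple_moving_average(x: np.ndarray, window: int = 5) -> np.ndarray:
    """Smooth a 1D series with an unweighted moving average."""
    kernel = np.ones(window) / window
    # mode="valid" avoids edge effects; output has len(x) - window + 1 points
    return np.convolve(x, kernel, mode="valid")

smoothed = simple_moving_average(np.arange(10, dtype=float), window=3)
```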
### Describe alternatives you've considered, if relevant
list any others below
### Additional context
_No response_ | 0easy
|
Title: [Feature request] Add apply_to_images to RandomSunFlare
Body: | 0easy
|
Title: Add Docs entry for new CSRF protections
Body: See #103 for context. | 0easy
|
Title: Document excluding `/` as an example of `excluded_urls`
Body: https://pydanticlogfire.slack.com/archives/C06EDRBSAH3/p1722462772309099 | 0easy
|
Title: [GCP] Fail to terminate CPU instances when MIG configuration is set
Body: To reproduce:
1. `export SKYPILOT_CONFIG=tests/test_yamls/use_mig_config.yaml`
```
managed_instance_group:
run_duration: 30
provision_timeout: 600
```
3. `sky launch -c mig-down --cloud gcp --cpus 2 echo hi`
4. `sky down mig-down`
```
File "/opt/conda/envs/sky-oss/lib/python3.10/site-packages/googleapiclient/http.py", line 938, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://compute.googleapis.com/compute/beta/projects/sky-dev-465/zones/us-central1-a/instanceGroupManagers/sky-mig-mig-down-11d9/resizeRequests?filter=state+eq+ACCEPTED&alt=json returned "Required 'compute.instanceGroupManagers.get' permission for 'projects/sky-dev-465/zones/us-central1-a/instanceGroupManagers/sky-mig-mig-down-11d9'". Details: "[{'message': "Required 'compute.instanceGroupManagers.get' permission for 'projects/sky-dev-465/zones/us-central1-a/instanceGroupManagers/sky-mig-mig-down-11d9'", 'domain': 'global', 'reason': 'forbidden'}]">
``` | 0easy
|
Title: Problems with recommendation to use `$var` syntax if expression evaluation fails
Body: A common error when evaluating expressions with IF or otherwise is using something like
```
IF ${x} == 'expected'
Keyword
END
```
The resulting error message was enhanced in RF 6.1 (#4676) to recommend using the `$x` syntax, and what you get nowadays is something like this:
> Invalid IF condition: Evaluating expression "xxx == 'expected'" failed: NameError: name 'xxx' is not defined nor importable as module
>
> Variables in the original expression "${x} == 'expected'" were resolved before the expression was evaluated. Try using "$x == 'expected'" syntax to avoid that. See Evaluating Expressions appendix in Robot Framework User Guide for more details.
This works fine otherwise, but not if the variable has been quoted and evaluation fails because the variable contains a matching quote or a newline. For example, if `${x}` contains `x'x` and you use
```
IF '${x}' == 'expected'
Keyword
END
```
the recommendation part of the error is this:
> Try using "'$x' == 'expected'" syntax to avoid that.
The above recommendation isn't correct, because `'$x'` would be treated as a literal string where `$x` has no special meaning. The correct recommendation would be this one without quotes around `$x`:
> Try using "$x == 'expected'" syntax to avoid that.
Although this is only a recommendation in an error situation, the recommendation being wrong is somewhat severe. Luckily it is pretty easy to detect whether the original variable was quoted and to remove the quotes in that case, as sketched below.
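For illustration only, a minimal sketch of that quote-stripping logic (`fix_recommendation` is a hypothetical helper, not Robot Framework's actual internals):
```python
def fix_recommendation(expression: str, name: str) -> str:
    """Turn a possibly quoted ``${name}`` into plain ``$name`` syntax."""
    variable = '${%s}' % name
    for quote in ("'", '"'):
        quoted = quote + variable + quote
        if quoted in expression:
            # The quotes must go too: '$x' would be a literal string.
            return expression.replace(quoted, '$' + name)
    return expression.replace(variable, '$' + name)

print(fix_recommendation("'${x}' == 'expected'", 'x'))  # $x == 'expected'
```
| 0easy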
|
Title: RoBERTa on SuperGLUE's 'Winogender Schema Diagnostics' task
Body: Winogender Schema Diagnostics (AX-g) is one of the tasks of the [SuperGLUE](https://super.gluebenchmark.com) benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the AX-g data and fine-tunes a model on it. We expect scores in the range of their entry on the [SuperGLUE leaderboard](https://super.gluebenchmark.com/leaderboard).
This can be formulated as a classification task, using the [`TransformerClassificationTT`](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/models/transformer_classification_tt.py) model, analogous to the IMDB model. You can start with the [experiment config](https://github.com/allenai/allennlp-models/blob/Imdb/training_config/tango/imdb.jsonnet) and [dataset reading step](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/tango/imdb.py#L13) from IMDB, and adapt them to your needs. | 0easy
|
Title: Add new exchange Binance.US
Body: Binance just announced Binance.US for registration, and it will be trading within 30 days or so. I suspect the endpoints for Binance.US may end up being different than Binance proper. But maybe not...
|
Title: Migrate to react-charts
Body: **Describe the enhancement you'd like**
Migrate to https://github.com/TanStack/react-charts because our old framework doesn't support TypeScript.
**Describe why this will benefit the LibrePhotos**
Better maintainability.
| 0easy
|
Title: Twitter+Authlib+FastAPI works different than the docs suggest
Body: **Describe the bug**
The docs describe a process for integrating Authlib, Starlette, and Twitter into an app, and that process needs to be revised. I followed [these docs](https://docs.authlib.org/en/latest/client/starlette.html) to add Twitter OAuth 1 to my app, but got the following errors.
The example in the docs should also be edited to create a FastAPI instance instead of Starlette instance, because `@app.get("/login")` decorators don't exist in Starlette. I can start a PR for the docs edit if you agree.
**Error Stacks**
```
File "/home/firescar96/.local/lib/python3.7/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "/home/firescar96/.local/lib/python3.7/site-packages/fastapi/routing.py", line 133, in app
raw_response = await dependant.call(**values)
File "./broken.py", line 31, in auth_via_google
token = await oauth.twitter.authorize_access_token(request)
File "/home/firescar96/.local/lib/python3.7/site-packages/authlib/integrations/starlette_client/integration.py", line 58, in authorize_access_token
return await self.fetch_access_token(**params)
File "/home/firescar96/.local/lib/python3.7/site-packages/authlib/integrations/base_client/async_app.py", line 97, in fetch_access_token
token = await client.fetch_access_token(token_endpoint, **kwargs)
File "/home/firescar96/.local/lib/python3.7/site-packages/authlib/integrations/httpx_client/oauth1_client.py", line 61, in fetch_access_token
self.handle_error('missing_verifier', 'Missing "verifier" value')
File "/home/firescar96/.local/lib/python3.7/site-packages/authlib/integrations/httpx_client/oauth1_client.py", line 75, in handle_error
raise OAuthError(error_type, error_description)
authlib.common.errors.AuthlibBaseError: missing_verifier: Missing "verifier" value
```
**To Reproduce**
```python
import os
from starlette.requests import Request
from fastapi import FastAPI
from starlette.middleware.sessions import SessionMiddleware
from ma_http_interface import config
from authlib.integrations.starlette_client import OAuth

app = FastAPI()
app.add_middleware(SessionMiddleware, secret_key="some-random-string")

oauth = OAuth()
oauth.register(
    name='twitter',
    request_token_url='https://api.twitter.com/oauth/request_token',
    access_token_url='https://api.twitter.com/oauth/access_token',
    authorize_url='https://api.twitter.com/oauth/authenticate',
    api_base_url='https://api.twitter.com/1.1/',
    access_token_params={},
    authorize_params={},
    client_id=os.environ['API_KEY'],
    client_secret=os.environ['API_SECRET']
)

@app.get("/login")
async def login_via_google(request: Request):
    redirect_uri = 'https://example.com/auth'
    return await oauth.twitter.authorize_redirect(request, redirect_uri)

@app.get("/auth")
async def auth_via_google(request: Request):
    token = await oauth.twitter.authorize_access_token(request)
    user = await oauth.twitter.parse_id_token(request, token)
    return dict(user)
```
**Expected behavior**
I added the `/auth` route to my twitter app callback routes. I expected the call to not error but it did. After reading the authlib code I found adding this line at the beginning of the `/auth` route resolved the error.
```python
oauth.twitter.access_token_params['verifier'] = request.query_params.get('oauth_verifier', '')
```
Do you think this is because Twitter passes the verifier query parameter in a different place than Google does?
**Environment:**
- OS: Debian 10
- Python Version: 3.7.3
- Authlib==0.14.1
- httpx==0.11.1
- starlette==0.12.9
- fastapi==0.48.0
| 0easy
|
Title: Enable paths for matchers to declare all elements of nested list
Body: **Is your feature request related to a problem? Please describe.**
I want to use Syrupy when testing API response bodies. I have nested lists of objects inside my payload. There are values that are returned as part of the nested objects that are not static (eg, created dates, uuids, etc). I am using the `path_type` matcher to access the nth element of an object, eg `"data.0.dateCreated": (str,)`. I would like to do some tests with larger amounts of elements in the nested list. I cannot see a way to replace the previous with something that operates on _all_ elements of the list.
**Describe the solution you'd like**
It would be nice to be able to do some thing like `"data.*.dateCreated": (str,)`. I would expect this to operate on all the elements of the list and replace the date created with the `<class str>` value.
**Describe alternatives you've considered**
The way I can see to do this right now is to write it out by hand, which is not particularly fun and prone to error. I could not see anything obvious that would let me do what I am looking for.
**Additional context**
I was just looking for a way to open an issue to ask if this was doable, but there was no question option. So I made it a feature
request. If this is something that is doable then I have totally missed where it is described in the docs. | 0easy
|
Title: I/O Operation on Closed File
Body: ```
File "/usr/local/lib/python3.9/dist-packages/discord/http.py", line 371, in request
raise HTTPException(response, data)
discord.errors.HTTPException: 400 Bad Request (error code: 50024): Cannot execute action on this channel type
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/discord/client.py", line 378, in _run_event
await coro(*args, **kwargs)
File "/home/gptbot/gpt3discord.py", line 120, in on_application_command_error
raise error
File "/usr/local/lib/python3.9/dist-packages/discord/bot.py", line 1114, in invoke_application_command
await ctx.command.invoke(ctx)
File "/usr/local/lib/python3.9/dist-packages/discord/commands/core.py", line 375, in invoke
await injected(ctx)
File "/usr/local/lib/python3.9/dist-packages/discord/commands/core.py", line 124, in wrapped
ret = await coro(arg)
File "/usr/local/lib/python3.9/dist-packages/discord/commands/core.py", line 1312, in _invoke
await command.invoke(ctx)
File "/usr/local/lib/python3.9/dist-packages/discord/commands/core.py", line 375, in invoke
await injected(ctx)
File "/usr/local/lib/python3.9/dist-packages/discord/commands/core.py", line 132, in wrapped
raise ApplicationCommandInvokeError(exc) from exc
discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: HTTPException: 400 Bad Request (error code: 50024): Cannot execute action on this channel type
INFO:backoff:Backing off send_chatgpt_chat_request(...) for 0.2s (ValueError: I/O operation on closed file)
```
Need to find out how exactly to recreate this, then place a `traceback.print_exc()` in `send_chatgpt_chat_request`; after that the issue should be very trivial to solve | 0easy
|
Title: Arguments `lr_decay` and `lr_decay_steps` are not being used in MLP Model
Body: ## 🐛 Bug Description
Under the [MLP](https://github.com/microsoft/qlib/blob/main/qlib/contrib/model/pytorch_nn.py) implementation there are several variables that need to be initialized, such as `loss`, `lr`, `lr_decay`, `lr_decay_steps`, and `optimizer`. However, it seems that the variables `lr_decay` and `lr_decay_steps` are indeed being initialized but are not used at any point in the code.
## To Reproduce
Steps to reproduce the behavior:
1. In the file [workflow_config_mlp_Alpha360](https://github.com/microsoft/qlib/blob/main/examples/benchmarks/MLP/workflow_config_mlp_Alpha360.yaml), set the variables `lr_decay` and `lr_decay_steps` to unexpected values such as `abc` and `def`, respectively, as in the example below:
```
model:
    class: DNNModelPytorch
    module_path: qlib.contrib.model.pytorch_nn
    kwargs:
        loss: mse
        lr: 0.002
        lr_decay: abc
        lr_decay_steps: def
        optimizer: adam
        max_steps: 8000
        batch_size: 4096
        GPU: 0
        pt_model_kwargs:
            input_dim: 360
```
2. Execute `qrun examples/benchmarks/MLP/workflow_config_mlp_Alpha360_br.yaml`
3. Then, it executes successfully, which is awkward behavior since the variables `lr_decay` and `lr_decay_steps` should be numbers. **NOTE**: the problem is not that such variables are not type-checked before initialization, but that they are never referenced anywhere in the code.
## Expected Behavior
If the variables `lr_decay` and `lr_decay_steps` were being used and the user passed invalid values, an error should be thrown.
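For illustration, a minimal sketch of how these two values would typically be consumed (the numeric values and the `StepLR` choice are assumptions, not qlib's actual implementation); actually using the values means invalid inputs surface as errors instead of being silently ignored:
```python
import torch

model = torch.nn.Linear(360, 1)  # stand-in for the MLP
lr, lr_decay, lr_decay_steps = 0.002, 0.96, 1000  # assumed numeric values

optimizer = torch.optim.Adam(model.parameters(), lr=lr)
# Multiply the learning rate by lr_decay every lr_decay_steps steps;
# a string like "abc" would now fail instead of being ignored.
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=lr_decay_steps, gamma=lr_decay
)
```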
## Screenshot
No need.
## Environment
**Note**: User could run `cd scripts && python collect_info.py all` under project directory to get system information
and paste them here directly.
```
Linux
x86_64
Linux-5.13.0-39-generic-x86_64-with-glibc2.17
#44~20.04.1-Ubuntu SMP Thu Mar 24 16:43:35 UTC 2022
Python version: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]
Qlib version: 0.8.4.99
numpy==1.22.3
pandas==1.4.2
scipy==1.8.0
requests==2.27.1
sacred==0.8.2
python-socketio==5.5.2
redis==4.2.2
python-redis-lock==3.7.0
schedule==1.1.0
cvxpy==1.2.0
hyperopt==0.1.2
fire==0.4.0
statsmodels==0.13.2
xlrd==2.0.1
plotly==5.7.0
matplotlib==3.5.1
tables==3.7.0
pyyaml==6.0
mlflow==1.24.0
tqdm==4.64.0
loguru==0.6.0
lightgbm==3.3.2
tornado==6.1
joblib==1.1.0
fire==0.4.0
ruamel.yaml==0.17.21
```
## Additional Notes
No need.
| 0easy
|
Title: delete integration tests
Body: we can delete them from the CI | 0easy
|
Title: api: DNSSEC keys missing when deleting and quickly recreating a domain
Body: Need to investigate whether this is a pdns bug or something caused by our side. | 0easy
|
Title: Action buttons are easy to miss in list views for high contrast mode users
Body: ### Issue Summary
Reported by partially sighted users and then confirmed:
> The “…” buttons were completely missed by some users who were not aware that these page controls were available in the list view
> […]
> Three dot menus were also not always visible under high contrast.
Here is how those buttons currently appear in a forced-colors "dark" theme:

### Proposed solution
A simple improvement would be to add a border around those buttons in forced colors mode, similarly to how we adapt the "Add" button. It seems like this could be done for all `w-dropdown__toggle`.
### Steps to Reproduce
1. Enable forced-colors mode
2. Go to a view with a listing of items such as the dashboard panels or Pages section
### Working on this
<!--
Do you have thoughts on skills needed?
Are you keen to work on this yourself once the issue has been accepted?
Please let us know here.
-->
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| 0easy
|
Title: Implement `do_ping` in GSheets adapter
Body: Currently we have a dialect-level `do_ping` method that simply returns `True`:
https://github.com/betodealmeida/shillelagh/blob/b4d71980f0b87040e8a2d0276b62a9c9becdf783/src/shillelagh/backends/apsw/dialects/base.py#L79
We could override the method at the GSheets dialect level. For example, GSheets could try to connect and return `False` if for some reason the API is down, roughly as sketched below.
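A minimal sketch of the idea (the subclass name and session handling are assumptions, not the actual implementation):
```python
class APSWGSheetsDialect(APSWDialect):  # hypothetical subclass
    def do_ping(self, dbapi_connection) -> bool:
        """Return True only if the GSheets API is actually reachable."""
        try:
            cursor = dbapi_connection.cursor()
            cursor.execute("SELECT 1")
            return True
        except Exception:
            return False
```
| 0easy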
|
Title: gotify integration should allow to set priorities
Body: Different priorities allow different [level of notifications](https://github.com/gotify/android?tab=readme-ov-file#message-priorities) within the application.
We should be able to set priority of integration for when a check goes down or up.
| 0easy
|
Title: Draw the projection of the center of gravity on the ground plane
Body: You'd need to update the following
**1. The model `VirtualHexapod` to have an attribute that stores this point**
https://github.com/mithi/hexapod-robot-simulator/blob/531fbb34d44246de3a0116def7e3d365de25b9f6/hexapod/models.py#L95
You can compute this point as outlined by the algorithm here; you should extract the point calculation into a separate function and update the attribute of the `VirtualHexapod` when it is called in the two `compute_orientation_properties` functions of the `hexapod.ground_contact_solver` module.
https://github.com/mithi/hexapod-robot-simulator/blob/531fbb34d44246de3a0116def7e3d365de25b9f6/hexapod/ground_contact_solver/helpers.py#L14
**2. In figure template**, you should append a new point trace at the end of the `data` list
https://github.com/mithi/hexapod-robot-simulator/blob/531fbb34d44246de3a0116def7e3d365de25b9f6/hexapod/templates/figure_template.py#L226
The trace has the following format
```
{
    "marker": {"color": INSERT_COLOR_HERE, "opacity": 1.0, "size": INSERT_SIZE_HERE},
    "mode": "markers",
    "name": "cog-projection",
    "type": "scatter3d",
    "x": [INSERT_COG_X_HERE],
    "y": [INSERT_COG_Y_HERE],
    "z": [0],
}
```
**3. `HexapodPlotter` could draw this point in the figure template**
https://github.com/mithi/hexapod-robot-simulator/blob/531fbb34d44246de3a0116def7e3d365de25b9f6/hexapod/plotter.py#L14 | 0easy
|
Title: Docs: little typo in documentation
Body: ### Summary
Here: https://docs.litestar.dev/2/usage/routing/overview.html
The text says
> If we are sending a request to the above with the url /some/sub-path, the handler will be invoked and the value of scope["path"] will equal "/`". If we send a request to /some/sub-path/abc, it will also be invoked,and scope["path"] will equal "/abc".
There is a typo on the root path: **"/`"** | 0easy
|
Title: [BUG] update() raises a `RevisionIdWasChanged` when expecting a `DuplicateKeyError`
Body: **Describe the bug**
When creating a `Document` with an unique indexed attribute, violating the uniqueness raise a `RevisionIdWasChanged` instead of a `DuplicateKeyError`.
**To Reproduce**
```python
import asyncio
from typing import Annotated

import pymongo
from beanie import Document, Indexed, init_beanie
from motor.motor_asyncio import AsyncIOMotorClient

client = AsyncIOMotorClient("mongodb://localhost:27017/test")

class MyDocument(Document):
    unique: Annotated[str, Indexed(index_type=pymongo.TEXT, unique=True)]

async def test():
    await init_beanie(client.get_default_database(), document_models=[MyDocument])
    doc = MyDocument(unique="test")
    await doc.save()
    doc2 = MyDocument(unique="test")
    await doc2.save()

if __name__ == "__main__":
    asyncio.run(test())
```
**Expected behavior**
Raise the `DuplicateKeyError` instead of replacing it by a `RevisionIdWasChanged`.
**Additional context**
The bug comes from [documents.py, line 739/740](https://github.com/BeanieODM/beanie/blob/main/beanie/odm/documents.py#L739).
I feel like we could easily catch the error, ensure the `DuplicateKeyError` comes from a revision id issue, and either raise a `RevisionIdWasChanged` or this error.
Something like:
```python
except DuplicateKeyError as e:
    raise RevisionIdWasChanged if "revision_id" in str(e) else e
```
I can prepare a PR for this, I just feel like my solution is a bit hacky right now... Any ideas? | 0easy
|
Title: add information about version in the version API
Body: Please add information about Mercury version in `/api/v1/version` endpoint. | 0easy
|
Title: Remove `@next/font`
Body: NextJS warn this
> Your project has `@next/font` installed as a dependency, please use the built-in `next/font` instead. The `@next/font` package will be removed in Next.js 14. You can migrate by running `npx @next/codemod@latest built-in-next-font .`. Read more: https://nextjs.org/docs/messages/built-in-next-font
| 0easy
|
Title: [DOC] Types of Contributions broken link
Body: # Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
Currently, `CONTRIBUTING.rst` in the docs contains a link that points **Types of Contributions** to https://github.com/ericmjl/pyjanitor/blob/dev/CONTRIBUTION_TYPES.html
I would like to propose a change, such that now the docs links the above to the docs website (https://ericmjl.github.io/pyjanitor/CONTRIBUTION_TYPES.html)
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to documentation page](https://github.com/ericmjl/pyjanitor/blob/dev/CONTRIBUTING.rst)
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/CONTRIBUTING.rst)
| 0easy
|
Title: chore: emit deprecation warning when Response.add_link() is used
Body: | 0easy
|
Title: Running tests with `reverse` flag causes test failures
Body: **Describe the bug**
Tests should be able to run in [any order](https://docs.djangoproject.com/en/4.2/topics/testing/overview/#order-in-which-tests-are-executed) to minimise dependencies between tests.
However, running tests with the reverse flag causes test failures.
**To Reproduce**
```bash
./manage.py test --reverse
```
**Versions (please complete the following information):**
- Django Import Export: 3.3.1
- Python 3.11
- Django 4.2
**Expected behavior**
Tests should pass with no issues
**This should be done on the `release-4` branch due to test refactorings**
| 0easy
|
Title: Better documentation of the options on `Map`
Body: At the moment, these options are poorly documented (cf issue #108). It would be good to document the main ones in the `Map` docstring.
The options available all come from the `ipywidgets.DomWidget` class. | 0easy
|
Title: Add endpoint `/graphs/{graph_id}/schedules`
Body: * Part of #8780
Since we're creating a per-agent view, we should allow fetching schedules per agent as well. Currently there is only an endpoint to fetch all of a user's agents' schedules. | 0easy
|
Title: Correct GraphQL field for wagtail RichTextField
Body: ```python
from wagtail.core.fields import RichTextField

@register_snippet
class TestSnippet(models.Model):
    title = models.CharField(max_length=50)
    description = RichTextField(
        blank=True
    )

    panels = [
        FieldPanel('title'),
        FieldPanel('description'),
    ]

    graphql_fields = [
        GraphQLString('title', required=True),
        GraphQLString('description', required=True),
    ]
```
I have been using `GraphQLString` for the wagtail `RichTextField`, which doesn't seem to be the most accurate way, as rich text field output should ideally be passed through a richtext method to render the content. E.g. in templates there is the richtext filter and the [expand_db_html](https://github.com/wagtail/wagtail/blob/571b9e1918861bcbe64a9005a4dc65f3a1fe7a15/wagtail/core/rich_text/__init__.py#L24) method.
One example is when using apostrophe `'` inside richtext fields, it returns code instead of the string.
```shell
In [62]: object.text
Out[62]: '<p data-block-key="t4jac">dc csdc 'This is the string'</p>'
```
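For illustration, a resolver along these lines could expand the stored rich text before returning it (a sketch, not the library's actual field type; `resolve_description` follows graphene's resolver naming convention):
```python
from wagtail.core.rich_text import expand_db_html

def resolve_description(self, info):
    # Expand internal <a>/<embed> references and entities into rendered HTML
    return expand_db_html(self.description)
```
| 0easy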
|
Title: Marketplace - hover state has the wrong bg color and truncate description
Body: ### Describe your issue.
Current card on the agent home page
<img width="507" alt="Image" src="https://github.com/user-attachments/assets/55d8e4c3-4486-419f-8d3c-4f1e766ce501" />
On this card on the home page, when you hover over it, it shows a grey background color. Please fix it to match the rest of the card color, as per the designs in Figma.
<img width="300" alt="Image" src="https://github.com/user-attachments/assets/c67e693b-2983-4fb1-9b85-7d6162e05def" />
| 0easy
|
Title: unify docstrings to be PEP 257 compatible
Body: https://www.python.org/dev/peps/pep-0257/ lists some conventions how dostrings should be formatted. Most of the docstrings are compatible already, some are not.
Some are descriptive which violate the rule that a docstring should prescribe the function or method's effect as a command "Do this", "Return that", instead of "Returns the pathname ...".
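A minimal before/after pair (hypothetical function, for illustration):
```python
# Descriptive (violates PEP 257):
def pathname(path):
    """Returns the normalized pathname."""

# Prescriptive (PEP 257 style):
def pathname(path):
    """Return the normalized pathname."""
```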
| 0easy
|
Title: deploying with Docker - update dependencies
Body: I don't know if it needs to be reported but I encountered two errors when I tried to deploy with Docker.
The first one was:
```bash
> [mercury 7/16] RUN mamba install --yes --file mercury/requirements.txt -c conda-forge:
0.463 Traceback (most recent call last):
0.463 File "/opt/conda/bin/mamba", line 7, in <module>
0.464 from mamba.mamba import main
0.464 File "/opt/conda/lib/python3.11/site-packages/mamba/mamba.py", line 49, in <module>
0.464 import libmambapy as api
0.464 File "/opt/conda/lib/python3.11/site-packages/libmambapy/__init__.py", line 7, in <module>
0.464 raise e
0.464 File "/opt/conda/lib/python3.11/site-packages/libmambapy/__init__.py", line 4, in <module>
0.464 from libmambapy.bindings import * # noqa: F401,F403
0.464 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
0.464 ImportError: libarchive.so.13: cannot open shared object file: No such file or directory
```
so I added libarchive13 to the list of dependencies in the Dockerfile.
The second error was:
```bash
7.447 The following package could not be installed
7.447 └─ mercury is installable and it requires
7.447 └─ sqlalchemy 1.4.27 with the potential options
7.447 ├─ sqlalchemy 1.4.27 would require
7.447 │ └─ python >=3.10,<3.11.0a0 , which can be installed;
7.447 ├─ sqlalchemy 1.4.27 would require
7.447 │ └─ python >=3.7,<3.8.0a0 , which can be installed;
7.447 ├─ sqlalchemy 1.4.27 would require
7.447 │ └─ python >=3.8,<3.9.0a0 , which can be installed;
7.447 └─ sqlalchemy 1.4.27 would require
7.447 └─ python >=3.9,<3.10.0a0 , which can be installed.
```
I fixed it by adding python=3.8 to the line:
`RUN mamba install --yes python=3.8 mercury -c`
It's all good now 😄
| 0easy
|
Title: While loading around 2 GB of data with `use_kenel_cal = True`, the data is not displayed in Streamlit
Body: | 0easy
|
Title: Add MaxAbsScaler Estimator
Body: The MaxAbsScaler estimator scales the data by its maximum absolute value. Use the IncrementalBasicStatistics estimator
to generate the min and max needed to scale the data. Investigate where the new implementation may perform poorly and
include guards in the code to fall back to Scikit-learn as necessary. The final deliverable would be to add this estimator to the 'spmd'
interfaces, which are effective on MPI-enabled supercomputers; this will use the underlying MPI-enabled minimum and maximum
calculators in IncrementalBasicStatistics. This is similar to the MinMaxScaler and can be combined with it into a small project.
This is an easy-difficulty project; a sketch of the core computation follows below.
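For illustration, the core scaling step derived from per-column min and max (a plain NumPy sketch, not the estimator itself):
```python
import numpy as np

def max_abs_scale(X: np.ndarray) -> np.ndarray:
    # The per-column max absolute value is recoverable from the column
    # min and max, which is exactly what IncrementalBasicStatistics yields.
    col_min, col_max = X.min(axis=0), X.max(axis=0)
    max_abs = np.maximum(np.abs(col_min), np.abs(col_max))
    max_abs[max_abs == 0.0] = 1.0  # like scikit-learn, leave all-zero columns as-is
    return X / max_abs
```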
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html | 0easy
|
Title: Marketplace - agent page - this should be clickable
Body: ### Describe your issue.
<img width="589" alt="Screenshot 2024-12-17 at 18 56 41" src="https://github.com/user-attachments/assets/e310b6ed-bd44-4584-8a47-58c80c6ba3bd" />
The name of the creator should be clickable. If a user clicks on this, it should lead them to the creator's page.
Behavior when hovered: underline should appear
| 0easy
|
Title: Include and document MethodOverrideMiddleware
Body: http://www.django-rest-framework.org/topics/browser-enhancements/#http-header-based-method-overriding
We should probably include that middleware in django-shop and document that it should be enabled.
| 0easy
|
Title: Translate Into English
Body: Hello!
Your project looks interesting. Please translate the docs into English. | 0easy
|
Title: Enhance programmatic API to create resource files
Body: The main motivation is making it easier to create resource files so that they can be converted to JSON. JSON support itself is #3902. `ResourceFile` already has `to_json` and `from_json` methods, but two additional things are needed to make its usage more convenient:
- It needs to be exposed via `robot.running`.
- It needs `from_file_system`, `from_string` and `from_model` class methods like those `TestSuite` has.
After these enhancements this is possible:
```python
from robot.running import ResourceFile
resource = ResourceFile.from_file_system('example.resource')
resource.to_json('example.rsrc')
``` | 0easy
|
Title: Admin interface: Change display name
Body: Currently the display name field of a user in the admin interface is read-only.
Extend the functionality of the [admin/manage_user](https://github.com/LAION-AI/Open-Assistant/blob/main/website/src/pages/admin/manage_user/%5Bid%5D.tsx) page and allow editing of the display name. | 0easy
|
Title: Tox v4 does not consistently require to escape the #
Body: ## Issue
Using this in `deps`:
```
pylint-guidelines-checker @ git+https://github.com/Azure/azure-sdk-for-python#subdirectory=scripts/pylint_custom_plugin
```
works like a charm, but I was assuming I should escape based on the tox v4 updating doc to:
```
pylint-guidelines-checker @ git+https://github.com/Azure/azure-sdk-for-python\#subdirectory=scripts/pylint_custom_plugin
```
But this one actually crashes tox. My CI unfortunately didn't keep the log, but the message is something like `cannot clone https://github.com/Azure/azure-sdk-for-python\` (note the `\` at the end). If that matters, I can re-crash my CI on purpose to get the real error message.
I expect that when the tox updating doc says "The hash sign (#) now always acts as comment within tox.ini", then it's the case.
## Environment
Provide at least:
- OS: Linux or Windows
- `pip list` of the host Python where `tox` is installed:
```console
Jinja2-3.1.2 MarkupSafe-2.1.2 astroid-2.13.3 attrs-22.2.0 autorest-0.1.0 black-22.12.0 cachetools-5.3.0 chardet-5.1.0 click-8.1.3 colorama-0.4.6 dill-0.3.6 distlib-0.3.6 docutils-0.19 exceptiongroup-1.1.0 filelock-3.9.0 importlib-metadata-6.0.0 iniconfig-2.0.0 invoke-2.0.0 isort-5.11.4 json-rpc-1.14.0 lazy-object-proxy-1.9.0 m2r2-0.3.3 mccabe-0.7.0 mistune-0.8.4 mypy-0.991 mypy-extensions-0.4.3 nodeenv-1.7.0 packaging-23.0 pathspec-0.10.3 platformdirs-2.6.2 pluggy-1.0.0 ptvsd-4.3.2 pylint-2.15.10 pyproject-api-1.5.0 pyright-1.1.287 pytest-7.2.1 pyyaml-6.0 tomli-2.0.1 tomlkit-0.11.6 tox-4.3.5 typed-ast-1.5.4 types-PyYAML-6.0.12.3 typing-extensions-4.4.0 virtualenv-20.17.1 wrapt-1.14.1 zipp-3.11.0
```
I'm not blocked since I removed the backslah and it works. | 0easy
|
Title: Fix style issue `Doc line too long FLK-W505`
Body: Fix `Doc line too long FLK-W505`: https://deepsource.io/gh/scanapi/scanapi/issue/FLK-W505/occurrences

> Docstring line lengths are recommended to be no greater than 79 characters.
Child of https://github.com/scanapi/scanapi/issues/405 | 0easy
|
Title: Unit tests for reference.py
Body: - Need unit tests on a few functions in `testbook/reference.py`.
<img width="609" alt="Screenshot 2021-05-22 at 10 25 18 PM" src="https://user-images.githubusercontent.com/39169550/119234637-982e1e80-bb4c-11eb-8c33-eb463a37b8ff.png">
Command to generate above output:
```bash
$ cd testbook
$ pytest --cov=testbook --cov-report=html
$ open htmlcov/index.html # or just navigate to this directory using file explorer (or finder) and open index.html
``` | 0easy
|
Title: [BUG] Long filter values needs to be abbreviated
Body: **Describe the bug**
Filters titles with very long values are not abbreviated, resulting in very wide visualizations.
**To Reproduce**
```python
from lux.vis.Vis import Vis
long_content = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum."
dataset = [
{"long_attr":long_content, "normal": 3, "normal2": 1},
{"long_attr":long_content, "normal": 3, "normal2": 1},
{"long_attr":long_content, "normal": 2, "normal2": 1},
{"long_attr":long_content, "normal": 4, "normal2": 1}
]
test = pd.DataFrame(dataset)
vis = Vis(["normal2", "normal",f"long_attr={long_content}"], test)
```

This is also an issue for matplotlib but the results are cutoff:

**Expected behavior**
We should abbreviate the values on the filter. We are in fact already abbreviating attribute values and the legend. Perhaps there is some shared code that we can reuse across these different abbreviation usages?
| 0easy
|
Title: [Tech debt] Update interface for ImageCompression
Body: Right now the transform has separate parameters `quality_lower` and `quality_upper`.
It would be better to have one parameter `quality_range = [quality_lower, quality_upper]`.
=>
We can update the transform to use the new signature and keep the old one working, but mark it as deprecated, e.g. as sketched below.
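A minimal sketch of the deprecation shim (the default range and the class skeleton are assumptions, not the actual transform):
```python
import warnings

class ImageCompression:
    def __init__(self, quality_range=(75, 100),
                 quality_lower=None, quality_upper=None):
        # The old signature keeps working but warns and maps onto quality_range.
        if quality_lower is not None or quality_upper is not None:
            warnings.warn(
                "quality_lower/quality_upper are deprecated; "
                "use quality_range=(lower, upper) instead.",
                DeprecationWarning,
                stacklevel=2,
            )
            quality_range = (
                quality_lower if quality_lower is not None else quality_range[0],
                quality_upper if quality_upper is not None else quality_range[1],
            )
        self.quality_range = quality_range
```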
----
PR could be similar to https://github.com/albumentations-team/albumentations/pull/1615
| 0easy
|
Title: Do we have any script to convert from hf format to original format?
Body: **Is your feature request related to a problem? Please describe.**
scripts/convert_cogvideox_to_diffusers.py
In this script, we can convert cogvideox -> diffusers. Do we have a script for the opposite direction?
cc @yiyixuxu
| 0easy
|
Title: Create a CLI option to pass a configuration file path
Body: ## Description
Create a CLI option to pass a configuration file path. Something like:
```bash
scanapi --config-path <path_to_config_file>
```
Currently this file is called `SETTINGS_FILE`:
https://github.com/scanapi/scanapi/blob/f27b031aa2f1fb4e9c85957e2c059733fabe69b8/scanapi/settings.py#L5
Here is where we add new arguments to the CLI:
https://github.com/scanapi/scanapi/blob/f27b031aa2f1fb4e9c85957e2c059733fabe69b8/scanapi/__init__.py#L13
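For illustration, a Click option along these lines could be added there (a sketch only; the option name comes from the proposal above):
```python
import click

@click.command()
@click.option(
    "--config-path",
    "config_path",
    type=click.Path(exists=True),
    default=None,
    help="Path to the configuration file.",
)
def main(config_path):
    # Fall back to the default SETTINGS_FILE when no path is given.
    ...
```
| 0easy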
|
Title: possible coding error on bbands.py
Body: **Which version are you running? The latest version is on GitHub. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
0.3.14b0
**Upgrade.**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Describe the bug**
Possible coding error at line 45 of pandas-ta/pandas_ta/volatility/bbands.py
```python
percent = bandwidth.shift(offset)
```
**Expected behavior**
PERCENT = (close - LOWER) / (UPPER - LOWER) with shift(offset)
```python
# Offset
if offset != 0:
lower = lower.shift(offset)
mid = mid.shift(offset)
upper = upper.shift(offset)
bandwidth = bandwidth.shift(offset)
percent = percent.shift(offset)
```
| 0easy
|
Title: Editing notes causes cursor to always reset to bottom
Body: When you try to edit text at the top of a note, the cursor moves to the bottom after any change.
(Video to better show it)
[2025-01-13 11-43-39.mp4](https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/352c6c9b-889d-4766-99b5-b59860e7d761/d7700743-9e19-4d37-a549-9cf1156a2b43?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi8zNTJjNmM5Yi04ODlkLTQ3NjYtOTliNS1iNTk4NjBlN2Q3NjEvZDc3MDA3NDMtOWUxOS00ZDM3LWE1NDktOWNmMTE1NmEyYjQzIiwiaWF0IjoxNzM2NzY5OTg5LCJleHAiOjMzMzA3MzI5OTg5fQ._COT8ZKB4-Ra2dglEaz7wezYtK_neFLOO02NW-dg5MI) | 0easy
|
Title: [SDK] improve PVC error message
Body: ### What happened?
While trying to implement a notebook locally with the recent LLM HP Python SDK, I faced the below error:
```
File ~/miniconda3/envs/llm-hp-optimization-katib-nb/lib/python3.9/site-packages/kubeflow/katib/api/katib_client.py:580, in KatibClient.tune(self, name, model_provider_parameters, dataset_provider_parameters, trainer_parameters, storage_config, objective, base_image, parameters, namespace, env_per_trial, algorithm_name, algorithm_settings, objective_metric_name, additional_metric_names, objective_type, objective_goal, max_trial_count, parallel_trial_count, max_failed_trial_count, resources_per_trial, retain_trials, packages_to_install, pip_index_url, metrics_collector_config)
578 break
579 else:
--> 580 raise RuntimeError(f"failed to create PVC. Error: {e}")
582 if isinstance(model_provider_parameters, HuggingFaceModelParams):
583 mp = "hf"
RuntimeError: failed to create PVC. Error: (422)
Reason: Unprocessable Entity
HTTP response headers: HTTPHeaderDict({'Audit-Id': '2abbe5b3-07d0-4254-b710-67520a09c45b', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Kubernetes-Pf-Flowschema-Uid': 'f421b4f1-b00a-449d-981b-fd500b3697db', 'X-Kubernetes-Pf-Prioritylevel-Uid': '225da36a-6099-4970-8c35-95547fa53796', 'Date': 'Thu, 16 Jan 2025 13:48:00 GMT', 'Content-Length': '948'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"PersistentVolumeClaim \"Llama-3.2-fine-tune\" is invalid: metadata.name: Invalid value: \"Llama-3.2-fine-tune\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')","reason":"Invalid","details":{"name":"Llama-3.2-fine-tune","kind":"PersistentVolumeClaim","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: \"Llama-3.2-fine-tune\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')","field":"metadata.name"}]},"code":422}
```
### What did you expect to happen?
I expect the error to be clearer for the user, preferably not returning the raw body of the response, and then to document the correct usage of the API.
JSON-formatted version of the error:
```json
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"PersistentVolumeClaim \"Llama-3.2-fine-tune\" is invalid: metadata.name: Invalid value: \"Llama-3.2-fine-tune\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')","reason":"Invalid","details":{"name":"Llama-3.2-fine-tune","kind":"PersistentVolumeClaim","causes":[{"reason":"FieldValueInvalid","message":"Invalid value: \"Llama-3.2-fine-tune\": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')","field":"metadata.name"}]},"code":422}
```
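For illustration, the SDK could validate the name client-side before creating the PVC and raise a readable error (a sketch; `validate_pvc_name` is a hypothetical helper):
```python
import re

# RFC 1123 subdomain, the same pattern Kubernetes applies to PVC names
_RFC1123 = re.compile(
    r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$"
)

def validate_pvc_name(name: str) -> None:
    if len(name) > 253 or not _RFC1123.match(name):
        raise ValueError(
            f"Invalid PVC name {name!r}: it must be a lowercase RFC 1123 "
            "subdomain (lowercase alphanumerics, '-' or '.', starting and "
            "ending with an alphanumeric character), e.g. 'llama-3-2-fine-tune'."
        )
```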
### Environment
Kubernetes version:
```bash
$ kubectl version
```
Katib controller version:
```bash
$ kubectl get pods -n kubeflow -l katib.kubeflow.org/component=controller -o jsonpath="{.items[*].spec.containers[*].image}"
```
Katib Python SDK version:
```bash
$ pip show kubeflow-katib
```
### Impacted by this bug?
Give it a 👍 We prioritize the issues with most 👍 | 0easy
|
Title: Misleading Error "Please install necessary libs for CatBoostModel."
Body: ## 🐛 Bug Description
Qlib does not require the installation of packages like `CatBoostModel`
But the output looks a little misleading.
## To Reproduce
Run `examples/workflow_by_code.ipynb` in jupyter notebook.
## Expected Behavior
Successfully run the script without installing CatBoostModel and warning.
## Screenshot

<!-- A screenshot of the error message or anything shouldn't appear-->
| 0easy
|
Title: tutorials for hyp.tools
Body: a jupyter notebook for each of the various tools demonstrating how they can be used in conjunction with the plotting functions. could be organized as:
+ dimensionality reduction
+ hyperalignment + procrustes
+ clustering
+ describe_pca
+ other | 0easy
|
Title: Change Requests metric API
Body: The canonical definition is here: https://chaoss.community/?p=3610 | 0easy
|
Title: Support creating variable name based on another variable like `${${VAR}}` in Variables section
Body: Example:
```robotframework
*** Variables ***
${X} Y # Normal assignment. Creates ${X} with value Y.
${${X}} Z # Assignment based on another variable. Creates ${Y} with value Z.
```
This is likely not a widely needed feature, but we plan to support this approach with the new `VAR` syntax (#3761) and we also have an issue about supporting it when creating variables based on keyword return values (#4545). Especially the `VAR` usage will be very close to what is proposed above and supporting the same in the Variables section is good for consistency.
```robotframework
VAR ${X} Y # Normal assignment. Creates ${X} with value Y.
VAR ${${X}} Z # Assignment based on another variable. Creates ${Y} with value Z.
```
Implementation shares most of the same code that `VAR` uses, so the extra effort is really small. | 0easy
|
Title: Implement new features of FlaskSecurity
Body: Flask-Security implemented "Send me email to login"; this should be implemented here too, along with all the email advice features/templates.
| 0easy
|
Title: Retry exponential backoff max float overflow
Body: ### Apache Airflow version
Other Airflow 2 version (please specify below)
### If "Other Airflow 2 version" selected, which one?
2.10.3
### What happened?
Hello,
I encountered a bug. My DAG configs were: retries=1000, retry_delay=5 min (300 seconds), max_retry_delay=1h (3600 seconds). My DAG failed ~1000 times, after which the Scheduler broke down. Retries then exceeded 1000 and stopped at retry attempt 1017.
I did my research on this problem and found that it is caused by the formula **min_backoff = math.ceil(delay.total_seconds() * (2 ** (self.try_number - 1)))** in the **taskinstance.py** file. The exponential backoff places no limit on try_number, so the calculation can overflow the maximum float value; even when max_retry_delay is set, the formula is still evaluated first, and for very large retry numbers it crashes.
Please fix this bug.
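For illustration, one way to guard the computation (a sketch only, not necessarily what the linked PRs do) is to cap the exponent before it is evaluated, since the result is clamped to max_retry_delay anyway:
```python
import math
from datetime import timedelta

def next_retry_delay(retry_delay: timedelta, try_number: int,
                     max_retry_delay: timedelta) -> timedelta:
    # Cap the exponent: beyond 2**40 seconds any realistic max_retry_delay
    # has long been exceeded, so larger exponents cannot change the result.
    exponent = min(try_number - 1, 40)
    backoff = math.ceil(retry_delay.total_seconds() * (2 ** exponent))
    return timedelta(seconds=min(backoff, max_retry_delay.total_seconds()))
```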
I also did pull request with my possible solution:
https://github.com/apache/airflow/pull/48057
https://github.com/apache/airflow/pull/48051
From Airflow logs:
2024-12-09 02:16:39.825 OverflowError: cannot convert float infinity to integer
2024-12-09 02:16:39.825 min_backoff = int(math.ceil(delay.total_seconds() * (2 ** (self.try_number - 2))))
2024-12-08 09:29:14.583 [2024-12-08T06:29:14.583+0000] {scheduler_job_runner.py:705} INFO - Executor reports execution of mydag.spark_submit run_id=manual__2024-11-02T10:19:30.618008+00:00 exited with status up_for_retry for try_number 470
Configs:
```python
with DAG(
    dag_id=DAG_ID,
    start_date=MYDAG_START_DATE,
    schedule_interval="@daily",
    catchup=AIRFLOW_CATCHUP,
    default_args={
        'depends_on_past': True,
        "retries": 1000,
        "retry_delay": duration(minutes=5),
        "retry_exponential_backoff": True,
        "max_retry_delay": duration(hours=1),
    },
) as dag:
```
<img width="947" alt="Image" src="https://github.com/user-attachments/assets/f3307b23-0307-4b4d-b968-3e1984fbe93c" />
<img width="1050" alt="Image" src="https://github.com/user-attachments/assets/f161329c-155c-4d92-b3b4-cf442d6ed036" />
### What you think should happen instead?
My pull request:
https://github.com/apache/airflow/pull/48057
https://github.com/apache/airflow/pull/48051
### How to reproduce
Use configs from above. Example:
```python
with DAG(
    dag_id=DAG_ID,
    start_date=MY_AIRFLOW_START_DATE,
    schedule_interval="@daily",
    catchup=AIRFLOW_CATCHUP,
    default_args={
        'depends_on_past': True,
        "retries": 1000,
        "retry_delay": duration(minutes=5),
        "retry_exponential_backoff": True,
        "max_retry_delay": duration(hours=1),
    },
) as dag:
```
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 0easy
|
Title: [WARNING] HypothesisDeprecationWarning: `HealthCheck.all()` is deprecated
Body: **Checklist**
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
**Describe the bug**
Running either the `st` command line tool or my own pytest suite, I receive a deprecation warning:
```
.../venv/lib/python3.10/site-packages/hypothesis/_settings.py:467: HypothesisDeprecationWarning: `Healthcheck.all()` is deprecated; use `list(HealthCheck)` instead.
The `hypothesis codemod` command-line tool can automatically refactor your code to fix this warning.
note_deprecation(
```
The warning isn't reproduced on all endpoints; however, as shown below, it is reported at least on the POST operation on https://example.schemathesis.io/api/payload .
**To Reproduce**
Steps to reproduce the behavior:
1. Run `st run --endpoint /api/payload https://example.schemathesis.io/openapi.json`
2. Output:
```
================================= Schemathesis test session starts =================================
Schema location: https://example.schemathesis.io/openapi.json
Base URL: https://example.schemathesis.io/api
Specification version: Open API 3.0.2
Workers: 1
Collected API operations: 1
.../venv/lib/python3.10/site-packages/hypothesis/_settings.py:467: HypothesisDeprecationWarning: `Healthcheck.all()` is deprecated; use `list(HealthCheck)` instead.
The `hypothesis codemod` command-line tool can automatically refactor your code to fix this warning.
note_deprecation(
POST /api/payload . [100%]
============================================= SUMMARY ==============================================
Performed checks:
not_a_server_error 101 / 101 passed PASSED
Hint: You can visualize test results in Schemathesis.io by using `--report` in your CLI command.
======================================== 1 passed in 12.76s ========================================
```
**Expected behavior**
No warnings to be shown.
**Environment (please complete the following information):**
- OS: MacOS 12.6.3
- Python version: 3.10.10
- Schemathesis version: 3.19.1
- Hypothesis version: 6.75.3
- Spec version: Open API 3.0.2
| 0easy
|
Title: Configuration option to specify default serializer plugin
Body: **Is your feature request related to a problem? Please describe.**
Currently you have to override the snapshot fixture in your project's root conftest.py in order to specify the default serializer class. It's potentially not clear what the final serializer is (order of shadowing).
**Describe the solution you'd like**
A configuration option to specify the python module path to load for the default serializer.
```
--snapshot-default-plugin syrupy.extensions.image.SVGImageExtension
```
**Describe alternatives you've considered**
Adding documentation for overriding the default fixture.
| 0easy
|
Title: SQLAlchemy must create parent folders when initializing sqlite connection
Body: Same as #281 but for SQLAlchemy client:
https://github.com/ploomber/ploomber/blob/088c7f2b3605b4f624f804e206078d7d8d35baf5/src/ploomber/clients/db.py#L107
We can use the `flavor` property in the constructor to determine whether we are dealing with a sqlite db or not, e.g. as sketched below.
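A minimal sketch of the idea (assuming the constructor has access to the URI and the `flavor` property mentioned above):
```python
import pathlib
from sqlalchemy.engine import make_url

def ensure_parent_dirs(uri: str) -> None:
    """Create parent folders for file-based sqlite databases."""
    url = make_url(uri)
    if url.drivername.startswith("sqlite") and url.database:
        pathlib.Path(url.database).parent.mkdir(parents=True, exist_ok=True)
```
| 0easy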
|
Title: ICML 2023
Body: I would happily review a PR including a contribution of the ICML 2023 files. The call for papers (including a style file) is out now: https://icml.cc/Conferences/2023/CallForPapers.
Most, if not all of the configurations should be the same as in the last year(s). | 0easy
|
Title: Update request-id recipe to use `contextvars` instead of `threading.local()`
Body: [`contextvars`](https://docs.python.org/3/library/contextvars.html) is a newer module in the stdlib (3.7+) that is more advanced than `theading.local()` as it not only understand threads, but also `asyncio` coroutines, etc.
Update the [Request ID Logging](https://falcon.readthedocs.io/en/stable/user/recipes/request-id.html#request-id-logging) recipe to use `contextvars`.
**Edit**: also add tests for this recipe.
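For illustration, the core of the updated recipe might look like this (a sketch assuming a WSGI middleware, not the recipe's final code):
```python
import contextvars
import uuid

# Each request context (thread or asyncio task) sees only its own value.
_request_id = contextvars.ContextVar("request_id", default=None)

class RequestIDMiddleware:
    def process_request(self, req, resp):
        _request_id.set(req.get_header("X-Request-ID") or str(uuid.uuid4()))

def get_request_id():
    return _request_id.get()
```
| 0easy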
|
Title: Add the ability to test whether data is changing/jumping beyond a tolerance
Body: # Brief Description
Following up on #703, this issue seeks to introduce the ability to test whether, in each column of a data frame, the change between every two rows is within a user-defined range of values.
I would like to propose..
# Example API
```python
import pandas as pd
import numpy as np
# Create a random data frame
df = pd.DataFrame(data=[["2015-01-01 00:00:00", -0.76, 2, 2, 1.2],
["2015-01-01 01:00:00", -0.73, 2, 4, 1.1],
["2015-01-01 02:00:00", -0.71, 2, 4, 1.1],
["2015-01-01 03:00:00", -0.68, 2, 32, 1.1],
["2015-01-01 04:00:00", -0.65, 2, 2, 1.0],
["2015-01-01 05:00:00", -0.76, 2, 2, 1.2],
["2015-01-01 06:00:00", -0.73, 2, 4, 1.1],
["2015-01-01 07:00:00", -0.71, 2, 4, 1.1],
["2015-01-01 08:00:00", -0.68, 2, 32, 1.1],
["2015-01-01 09:00:00", -0.65, 2, 2, 1.0],
["2015-01-01 10:00:00", -0.76, 2, 2, 1.2],
["2015-01-01 11:00:00", -0.73, 2, 4, 1.1],
["2015-01-01 12:00:00", -0.71, 2, 4, 1.1],
["2015-01-01 13:00:00", -0.68, 2, 32, 1.1],
["2015-01-01 14:00:00", -0.65, 2, 2, 1.0],
["2015-01-01 15:00:00", -0.76, 2, 2, 1.2],
["2015-01-01 16:00:00", -0.73, 2, 4, 1.1],
["2015-01-01 17:00:00", -0.71, 2, 4, 1.1],
["2015-01-01 18:00:00", -0.68, 2, 32, 1.1],
["2015-01-01 19:00:00", -0.65, 2, 2, 1.0],
["2015-01-01 20:00:00", -0.76, 2, 2, 1.2],
["2015-01-01 21:00:00", -0.73, 2, 4, 1.1],
["2015-01-01 22:00:00", -0.71, 2, 4, 1.1],
["2015-01-01 23:00:00", -0.68, 2, 32, 1.1],
["2015-01-02 00:00:00", -0.65, 2, 2, 1.0]],
columns=['DateTime', 'column1', 'column2', 'column3', 'column4'])
# Set the index
df["DateTime"] = pd.to_datetime(df["DateTime"])
df.set_index("DateTime", inplace=True)
# Flag functionality
df1 = df.copy(deep=True)
def flag_jumps(
    df_input: pd.DataFrame,
    tolerance: float = 10**-2,
) -> pd.DataFrame:
    """
    Returns a data frame the size of the input

    Flags rows in each column where tolerance is violated

    :param df_input: input data frame to test for jumps
    :param tolerance: acceptable value of tolerance
    :return: data frame with flags indicating whether tolerance has been violated or not
    """
    # Calculate the difference between every two rows
    df2 = df_input.diff()

    # Check for tolerance violation
    df3 = df2.gt(tolerance)

    return df3
jump_flags = flag_jumps(df1, tolerance=2)
print(jump_flags)
```
| 0easy
|
Title: [k8s][gcp] Accept non-k8s TPU names
Body: Running `sky launch` with TPUs on GCP supports the following TPU names:
```
GOOGLE_TPU AVAILABLE_QUANTITIES
tpu-v2-8 1
tpu-v3-8 1
tpu-v4-8 1
tpu-v4-16 1
tpu-v4-32 1
tpu-v5litepod-1 1
tpu-v5litepod-4 1
tpu-v5litepod-8 1
tpu-v5p-8 1
tpu-v5p-16 1
tpu-v5p-32 1
tpu-v6e-1 1
tpu-v6e-4 1
tpu-v6e-8 1
```
However, when running `sky launch` on a GKE k8s cluster, the accepted TPU names are a bit different, as described in https://cloud.google.com/kubernetes-engine/docs/how-to/tpus#run. SkyPilot hardcodes this mapping in `GKE_TPU_ACCELERATOR_TO_GENERATION`.
```
# Mapping used to get generation for TPU accelerator name.
# https://cloud.google.com/kubernetes-engine/docs/how-to/tpus#run
GKE_TPU_ACCELERATOR_TO_GENERATION = {
'tpu-v4-podslice': 'v4',
# Only Single-host v5e TPU configurations are allowed.
'tpu-v5-lite-device': 'v5e',
# Multi-host compatible v5e TPU configurations allowed.
'tpu-v5-lite-podslice': 'v5e',
'tpu-v5p-slice': 'v5p',
'tpu-v6e-slice': 'v6e',
}
```
What ends up happening is that if you want to launch a cluster in GKE with 8 v6e TPUs, you would need to specify `--gpus tpu-v6e-slice:8`, not `--gpus tpu-v6e-8`. In fact, specifying `--gpus tpu-v6e-8` would fail to match a GKE node with 8 `tpu-v6e-slice`s.
For user convenience, we should let a user launch a cluster/job on k8s nodes using the non-k8s TPU names - i.e. launching a cluster on GKE with `--gpus tpu-v6e-slice:8` or `--gpus tpu-v6e-8` should have the same effect. | 0easy
|
Title: add new sample/s/gpu metric
Body: will help having comparable numbers | 0easy
|
Title: MEDIA_URL in the test settings
Body: ## What happened?
When running the tests in CI, this happens:
```
ERRORS:
?: (urls.E006) The MEDIA_URL setting must end with a slash.
```
## What should've happened instead?
Successfully run the tests. I believe we need to add a trailing slash to `MEDIA_URL`. I can submit a PR. | 0easy
|
Title: Add Flower Baseline: Floco
Body: ### Paper
Dennis Grinwald, Philipp Wiesner, Shinichi Nakajima. Federated Learning over Connected Modes (NeurIPS'25)
### Link
https://openreview.net/pdf?id=JL2eMCfDW8
### Maybe give motivations about why the paper should be implemented as a baseline.
Floco is an effective method for improving personalized model performance in the non-IID cross-silo federated learning setting compared to many state-of-the-art personalized FL methods while increasing global model performance compared to FedAvg and FedProx. It achieves this by training a shared parameter simplex, called solution simplex, across clients. This approach assigns similar clients to similar regions within the simplex, leading to better collaboration of similar clients, i.e. clients with similar data distributions, and less interference with dissimilar clients.
The goal of this issue is to reproduce the SimpleCNN CIFAR-10 results from the original paper. The resulting code can be easily extended to the full experimental setup of the paper.
### Is there something else you want to add?
_No response_
### Implementation
#### To implement this baseline, it is recommended to do the following items in that order:
### For first time contributors
- [x] Read the [`first contribution` doc](https://flower.ai/docs/first-time-contributors.html)
- [x] Complete the Flower tutorial
- [x] Read the Flower Baselines docs to get an overview:
- [x] [How to use Flower Baselines](https://flower.ai/docs/baselines/how-to-use-baselines.html)
- [x] [How to contribute a Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html)
### Prepare - understand the scope
- [x] Read the paper linked above
- [x] Decide which experiments you'd like to reproduce. The more the better!
- [x] Follow the steps outlined in [Add a new Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html#add-a-new-flower-baseline).
- [x] You can use as reference [other baselines](https://github.com/adap/flower/tree/main/baselines) that the community merged following those steps.
### Verify your implementation
- [x] Follow the steps indicated in the `EXTENDED_README.md` that was created in your baseline directory
- [x] Ensure your code reproduces the results for the experiments you chose
- [x] Ensure your `README.md` is ready to be run by someone who is not familiar with your code. Are all step-by-step instructions clear?
- [x] Ensure running the formatting and typing tests for your baseline runs without errors.
- [x] Clone your repo on a new directory, follow the guide on your own `README.md` and verify everything runs. | 0easy
|
Title: Suggestion: add code to disable input zooming in iOS browsers
Body: 1. On iOS, tapping the quantity field or typing an email address zooms the page in, and an ugly scrollbar appears.
2. Fix: set the input font size to 16px, then add a <meta> tag and some JS. My own adjustments are as follows:
<script>
// Disable zooming
function addMeta() {
  $('head').append('<meta name="viewport" content="width=device-width,initial-scale=1,minimum-scale=1,maximum-scale=1,user-scalable=no" />');
}
setTimeout(addMeta, 3000);
// Disable pinch-to-zoom
document.documentElement.addEventListener('touchstart', function (event) {
  if (event.touches.length > 1) {
    event.preventDefault();
  }
}, {
  passive: false
});
// Disable double-tap zoom
var lastTouchEnd = 0;
document.documentElement.addEventListener('touchend', function (event) {
  var now = Date.now();
  if (now - lastTouchEnd <= 300) {
    event.preventDefault();
  }
  lastTouchEnd = now;
}, { passive: false });
</script>
<style>
input { font-size: 16px !important; }
</style> | 0easy
|
Title: No module named 'scrapling.default'
Body: ### Have you searched if there an existing issue for this?
- [X] I have searched the existing issues
### Python version (python --version)
Python 3.11.0
### Scrapling version (scrapling.__version__)
0.2.7
### Dependencies version (pip3 freeze)
agate==1.7.0
aiofiles==24.1.0
aiohttp==3.9.1
aiohttp-retry==2.8.3
aiohttp-socks==0.8.4
aiometer==0.5.0
aioquic==1.2.0
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
appdirs==1.4.4
arrow==1.3.0
asgiref==3.8.1
asttokens==2.4.0
async-property==0.2.2
async-timeout==4.0.3
attrs==23.1.0
Authlib==1.3.2
azure-core==1.31.0
azure-storage-blob==12.13.0
babel==2.16.0
backcall==0.2.0
bcrypt==4.2.0
beautifulsoup4==4.12.2
betterproto==1.2.5
bleach==6.1.0
blinker==1.7.0
boto3==1.26.165
botocore==1.29.165
boxsdk==2.10.0
braintree==4.17.1
Brotli==1.1.0
browser-cookie3==0.19.1
browserforge==1.1.2
bs4==0.0.1
cachetools==5.5.0
cairocffi==1.6.1
CairoSVG==2.7.1
camoufox==0.4.4
cattrs==23.2.1
censusgeocode==0.4.3.post1
certifi==2024.8.30
cffi==1.16.0
chardet==5.2.0
charset-normalizer==3.3.1
chromedriver-autoinstaller==0.6.4
civis==1.16.1
click==8.1.3
cloudpickle==2.2.1
colorama==0.4.6
colorthief==0.2.1
comm==0.1.4
contourpy==1.2.0
cryptography==43.0.0
cssselect==1.2.0
cssselect2==0.7.0
curl_cffi==0.7.1
curlify==2.2.1
cycler==0.12.1
dbt-core==1.4.9
dbt-extractor==0.4.1
dbt-postgres==1.4.9
dbt-redshift==1.4.0
debugpy==1.7.0
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
discord-webhook==1.3.0
docopt==0.6.2
docutils==0.17.1
duckduckgo_search==4.5.0
environs==9.5.0
ephem==4.1.5
et-xmlfile==1.1.0
executing==1.2.0
exrex==0.11.0
facebook-business==13.0.0
fake-useragent==1.4.0
Faker==26.0.0
fastapi==0.109.1
fastf1==3.1.5
fastjsonschema==2.19.1
feedparser==6.0.11
filelock==3.16.1
Flask==3.0.2
Flask-Cors==4.0.0
flask-localtunnel==1.0.7
Flask-SQLAlchemy==3.1.1
fonttools==4.45.0
frozenlist==1.4.1
future==1.0.0
fuzzywuzzy==0.18.0
g4f==0.2.2.7
geographiclib==2.0
geopy==2.4.1
gevent==24.2.1
google-api-core==2.20.0
google-api-python-client==1.7.7
google-auth==2.29.0
google-auth-httplib2==0.2.0
google-auth-oauthlib==1.2.1
google-cloud-bigquery==3.23.1
google-cloud-core==2.4.1
google-cloud-storage==2.16.0
google-cloud-storage-transfer==1.9.1
google-crc32c==1.6.0
google-resumable-media==2.7.0
googleapis-common-protos==1.65.0
greenlet==3.1.1
grequests==0.7.0
grpcio==1.62.2
grpcio-status==1.62.2
grpclib==0.4.7
gspread==3.7.0
h11==0.14.0
h2==4.1.0
hologram==0.0.16
hpack==4.0.0
html5lib==1.1
httpcore==1.0.6
httplib2==0.22.0
httpx==0.27.2
humanize==4.9.0
hyperframe==6.0.1
ics==0.7.2
idna==3.4
importlib-metadata==7.0.1
inflection==0.5.1
iniconfig==2.0.0
ipykernel==6.25.2
ipython==8.12.3
isodate==0.6.1
itsdangerous==2.1.2
jedi==0.19.0
jellyfish==0.11.2
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.2.0
jsonref==0.2
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
jupyter_client==8.3.1
jupyter_core==5.3.1
jupyterlab_pygments==0.3.0
kaitaistruct==0.10
kaleido==0.2.1
kiwisolver==1.4.5
language-tags==1.2.0
ldap3==2.9.1
leather==0.4.0
Levenshtein==0.24.0
llvmlite==0.41.1
lml==0.1.0
Logbook==1.5.3
loguru==0.7.2
lxml==5.1.0
lz4==4.3.3
markdown-it-py==3.0.0
MarkupSafe==2.1.5
marshmallow==3.20.1
mashumaro==3.3.1
matplotlib==3.8.2
matplotlib-inline==0.1.6
mdurl==0.1.2
minimal-snowplow-tracker==0.0.2
mistune==3.0.2
mitmproxy==10.4.2
mitmproxy-windows==0.6.3
mitmproxy_rs==0.6.3
mmh3==4.1.0
msgpack==1.0.8
msrest==0.7.1
multidict==6.0.4
mysql-connector==2.2.9
mysql-connector-python==8.0.18
mysqlclient==2.2.4
nbclient==0.9.0
nbconvert==7.16.2
nbformat==5.9.2
nest-asyncio==1.5.7
networkx==2.8.8
newmode==0.1.6
numba==0.58.1
numpy==1.26.1
oauth2client==4.1.3
oauthlib==3.2.2
openpyxl==3.0.10
openskill==5.1.1
orjson==3.10.12
outcome==1.3.0.post0
packaging==23.1
pandas==2.0.3
pandocfilters==1.5.1
paramiko==3.4.0
parse==1.20.1
parsedatetime==2.4
parso==0.8.3
parsons==3.2.0
passlib==1.7.4
pathspec==0.10.3
petl==1.7.15
pickleshare==0.7.5
Pillow==10.1.0
pip==24.3.1
pipreqs==0.5.0
platformdirs==3.10.0
playwright==1.48.0
plotly==5.17.0
pluggy==1.4.0
polars==0.19.16
prompt-toolkit==3.0.39
proto-plus==1.24.0
protobuf==4.25.5
psutil==5.9.5
psycopg2==2.9.9
psycopg2-binary==2.9.9
publicsuffix2==2.20191221
pure-eval==0.2.2
py-arkose-generator==0.0.0.2
py-localtunnel==1.0.3
pyairtable==2.3.3
pyarrow==14.0.1
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycountry==24.6.1
pycparser==2.21
pycryptodome==3.20.0
pycryptodomex==3.20.0
pydantic==2.6.0
pydantic_core==2.16.1
pydivert==2.1.0
pyee==12.0.0
pyexcel==0.7.0
pyexcel-ezodf==0.3.4
pyexcel-io==0.6.6
pyexcel-ods3==0.6.1
pyexcel-xlsxw==0.6.1
PyExecJS==1.5.1
PyGithub==1.51
Pygments==2.16.1
PyJWT==2.8.0
pylsqpack==0.3.18
PyNaCl==1.5.0
pyOpenSSL==24.2.1
pyparsing==3.1.1
pyperclip==1.9.0
pypiwin32==223
pyppeteer==1.0.2
pyquery==2.0.0
PySocks==1.7.1
pytest==7.4.4
pytest-base-url==2.1.0
pytest-playwright==0.4.3
pythermalcomfort==2.8.8
python-dateutil==2.8.2
python-dotenv==1.0.0
python-Levenshtein==0.24.0
python-slugify==8.0.3
python-socks==2.4.4
pytimeparse==1.1.8
pytz==2023.3.post1
pyusb==1.2.1
pywin32==306
PyYAML==6.0.2
pyzmq==25.1.1
rapidfuzz==3.5.2
rebrowser_playwright==1.48.100
referencing==0.33.0
requests==2.31.0
requests-cache==1.1.1
requests-file==2.1.0
requests-futures==1.0.1
requests-html==0.10.0
requests-oauthlib==1.3.0
requests-toolbelt==0.10.1
rich==13.9.4
rpds-py==0.18.0
rsa==4.9
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
s3transfer==0.6.2
scipy==1.11.3
scrapling==0.2.7
screeninfo==0.8.1
selenium==4.25.0
selenium-wire==5.1.0
service-identity==24.1.0
setuptools==70.0.0
sgmllib3k==1.0.0
simple-salesforce==1.11.6
simplejson==3.16.0
six==1.16.0
slackclient==1.3.0
sniffio==1.3.0
sortedcontainers==2.4.0
soupsieve==2.5
SQLAlchemy==1.4.54
sqlalchemy-cockroachdb==2.0.2
sqlparse==0.4.3
sshtunnel==0.4.0
stack-data==0.6.2
starlette==0.35.1
stem==1.8.2
stringcase==1.2.0
stripe==8.2.0
suds-py3==1.3.4.0
SurveyGizmo==1.2.3
tabulate==0.9.0
TatSu==5.11.3
tenacity==8.2.3
text-unidecode==1.3
texttable==1.7.0
thefuzz==0.20.0
timple==0.1.7
tinycss2==1.2.1
tldextract==5.1.3
tornado==6.4.1
torrequest==0.1.0
tqdm==4.66.1
traitlets==5.9.0
trio==0.24.0
trio-websocket==0.11.1
twilio==8.2.1
types-python-dateutil==2.8.19.20240106
typing_extensions==4.9.0
tzdata==2023.3
ua-parser==1.0.0
ua-parser-builtins==0.18.0
undetected-chromedriver==3.5.5
uritemplate==3.0.1
url-normalize==1.4.3
urllib3==1.26.19
urwid==2.6.15
us==3.1.1
uvicorn==0.27.0.post1
validate-email==1.3
w3lib==2.1.2
wcwidth==0.2.6
webdriver-manager==4.0.2
webencodings==0.5.1
websocket-client==1.8.0
websockets==10.4
Werkzeug==2.3.8
whatsapp-converter==0.6.4
wheel==0.41.3
win10toast==0.9
win32-setctime==1.1.0
wrapt==1.16.0
wsproto==1.2.0
XlsxWriter==3.2.0
xmltodict==0.11.0
yarg==0.1.9
yarl==1.9.4
zeep==4.2.1
zipp==3.17.0
zope.event==5.0
zope.interface==6.4.post2
zstandard==0.23.0
### What's your operating system?
Windows 11
### Are you using a separate virtual environment?
No
### Expected behavior
Just for the import statement `scrapling.default` to work.
### Actual behavior (Remember to use `debug` parameter)
No module named 'scrapling.default'
### Steps To Reproduce
1. Open a new folder
2. Create a new venv
3. Install scrapling | 0easy
|
Title: Add as_caption_kwargs() method to aiogram.utils.formatting.Text
Body: ### aiogram version
3.x
### Problem
Methods like `Bot.send_document()` use `caption` and `caption_entities` parameters instead of `text` and `entities`. That results in repeated `Text(...).as_kwargs(text_key="caption", entities_key="caption_entities")` calls.
### Possible solution
I suggest introducing a shortcut method `as_caption_kwargs()` or something like that
### Alternatives
_No response_
### Code example
```python3
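# Illustrative sketch filling in this empty example; the shortcut at the end
# is the proposed (not yet existing) API.
from aiogram.utils.formatting import Bold, Text

content = Text("Photo: ", Bold("report.pdf"))

# Current, repetitive pattern:
kwargs = content.as_kwargs(text_key="caption", entities_key="caption_entities")
# await bot.send_document(chat_id, document, **kwargs)

# With the proposed shortcut this would become:
# kwargs = content.as_caption_kwargs()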
```
### Additional information
_No response_ | 0easy
|
Title: added tests
Body:
### Description
Added some tests to verify the functionality of igel.
| 0easy
|
Title: Required annotated types (uuid in this case) in pydantic are returning a "uuid_type" error when not provided, when they should instead return a "missing" error
Body: ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
Here is the class definition, some_id being the culprit property:
```py
class ExampleModel(BaseModel):
name: str = Field(..., max_length=255)
some_id: Annotated[UUID4, Strict(False)] = Field(...)
```
If I POST to the API with:
```
{
"name" : None
"some_id" : None
}
```
I get two errors as expected. The error for "name" is of type "missing", also as expected, because it is missing.
The error for "some_id", however, returns an error with type "uuid_type", when it should also be returning "missing".
### Example Code
_No response_
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: /Users/claytonstetz/.pyenv/versions/3.12.6/lib/python3.12/site-packages/pydantic
python version: 3.12.6 (main, Sep 21 2024, 22:00:30) [Clang 14.0.3 (clang-1403.0.22.14.1)]
platform: macOS-13.0-arm64-arm-64bit
related packages: pydantic-extra-types-2.9.0 fastapi-0.115.2 typing_extensions-4.12.2
commit: unknown
```
| 0easy
|
Title: Question on Web Scraper
Body: Hello there I hope all is well.
Thank you so much for:
https://github.com/tattooday/web-scraping/blob/master/CME2.py
Its helping a lot, as I am basically looking for a python scraper to scrape futures prices. Just one question for you.
In your code, you designate code numbers for silver, gold, palladium, and copper to be:
458, 437, 445, and 438
I am curious where I would go to find these code mappings, and where to find more alternatives and their associated codes so I can scrape them too, such as wheat, oil, etc.
Basically, where do I go on the web to see how silver is linked to 458, for example, and gold is linked to 437, and also to find codes for other commodities.
I really appreciate your help in advance and hope to speak soon!
Best,
Sam | 0easy
|
Title: Clear output directory between runs
Body: When you run a notebook several times and have [`OutputDir`](https://runmercury.com/docs/output-widgets/outputdir/), files from previous runs will still be available in the output directory. | 0easy
|
Title: [DOC] Document less common in-memory data formats
Body: The recent PR https://github.com/sktime/sktime/pull/7231 adds an API reference page for in-memory data representation.
With this, detailed documentation on the most commonly used time series machine types were also added.
However, documentation for the less common formats is still missing - we should add this.
Recipe:
1. pick one type and help document it! (one pull request per type). See below for a list of undocumented types.
2. use `get_examples` from `sktime.datatypes` to generate some examples, and look at `MTYPE_REGISTER` for a short description. You can also use `convert_to` to see how already documented types convert to the one being documented. A minimal sketch is shown after this list.
3. add a docstring to the class in one of the `_check` modules, e.g., `sktime.datatypes._panel._check` for one of the `Panel` mtypes. The name of the class is the same as below. All modules already contain complete examples, so "follow the pattern".
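For step 2, a minimal sketch using sktime's public `datatypes` utilities (the mtype shown is just one already-documented example):
```python
from sktime.datatypes import MTYPE_REGISTER, get_examples

# Example fixtures for an already-documented mtype, for comparison:
examples = get_examples(mtype="pd.DataFrame", as_scitype="Series")
print(examples[0])

# Short descriptions of all registered mtypes:
for mtype, scitype, description in MTYPE_REGISTER:
    print(scitype, mtype, "-", description)
```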
To identify a missing case, see the list below.
Alternatively, you can also:
* search for classes with empty docstrings in one of the `datatypes` `_check` modules, within the `_series`, `_panel`, `_hierarchical` or `_table` submodules.
* look at the "Data Format Specifications" page on the API reference ([link](https://www.sktime.net/en/latest/api_reference.html)), click through concrete representations and search for empty docstrings.
Docstrings should be completed with a description of the in-memory type. Example descriptions can be found for the more common types, in the API reference, or the aforementioned modules.
Classes currently empty:
- [x] `SeriesXArray`
- [ ] `SeriesDask`
- [ ] `SeriesPolarsEager`
- [ ] `SeriesGluontsList`
- [ ] `SeriesGluontsPandas`
- [ ] `PanelDask`
- [ ] `PanelPolarsEager`
- [x] `PanelGluontsList`
- [ ] `PanelGluontsPandas`
- [ ] `HierarchicalDask`
- [ ] `HierarchicalPolarsEager`
- [x] `TablePdDataFrame`
- [x] `TablePdSeries`
- [x] `TableNp1D`
- [ ] `TableNp2D`
- [ ] `TableListOfDict`
- [ ] `TablePolarsEager`
FYI @Abhay-Lejith and @pranavvp16, as you wrote some of the respective plug-ins - helping with the docs would be much appreciated, since you know most about these! | 0easy
|
Title: Most of the indicators returns NaN.
Body: Pandas_ta version: 0.3.14b0
Python version: Python 3.8.10
Ta-Lib is installed. Ta-Lib version: 0.4.24
Most of the indicators return Nan. Only a few indicators work.
```python
import numpy as np
import pandas as pd
import pandas_ta as ta
data=np.genfromtxt("b15.csv",delimiter=",")
df=pd.DataFrame(data,columns=["Time","Open","High","Low","Close","Volume"])
e=ta.momentum.macd(close=df["Close"])
print(e)
```
```sh
MACD_12_26_9 MACDh_12_26_9 MACDs_12_26_9
Time
1970-01-01 00:26:17.836800 NaN NaN NaN
1970-01-01 00:26:17.837700 NaN NaN NaN
1970-01-01 00:26:17.838600 NaN NaN NaN
1970-01-01 00:26:17.839500 NaN NaN NaN
1970-01-01 00:26:17.840400 NaN NaN NaN
... ... ... ...
1970-01-01 00:27:47.256300 NaN NaN NaN
1970-01-01 00:27:47.257200 NaN NaN NaN
1970-01-01 00:27:47.258100 NaN NaN NaN
1970-01-01 00:27:47.259000 NaN NaN NaN
1970-01-01 00:27:47.259900 NaN NaN NaN
[98980 rows x 3 columns]
```
Vortex works:
```python
e=ta.trend.vortex(close=df["Close"],high=df["High"],low=df["Low"])
print(e)
```
```sh
VTXP_14 VTXM_14
Time
1970-01-01 00:26:17.836800 NaN NaN
1970-01-01 00:26:17.837700 NaN NaN
1970-01-01 00:26:17.838600 NaN NaN
1970-01-01 00:26:17.839500 NaN NaN
1970-01-01 00:26:17.840400 NaN NaN
... ... ...
1970-01-01 00:27:47.256300 1.157145 0.864892
1970-01-01 00:27:47.257200 1.152978 0.886676
1970-01-01 00:27:47.258100 1.184268 0.856069
1970-01-01 00:27:47.259000 1.219873 0.828368
1970-01-01 00:27:47.259900 1.238993 0.761392
[98980 rows x 2 columns]
``` | 0easy
|
Title: Getting raw (unadjusted) intraday data
Body: Hello,
I want to get intraday data without split/dividend adjustment. How can I get raw intraday data?
From API [docs](https://www.alphavantage.co/documentation/#intraday):
> Optional: adjusted
>
> By default, adjusted=true and the output time series is adjusted by historical split and dividend events. Set adjusted=false to query raw (as-traded) intraday values.
`get_intraday` and `get_intraday_extended` methods do not contain `adjusted` argument and I assume it will be true by default.
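In the meantime, a workaround sketch is to call the REST endpoint directly with `adjusted=false`, per the docs linked above (symbol and API key are placeholders):
```python
import requests

params = {
    "function": "TIME_SERIES_INTRADAY",
    "symbol": "IBM",           # placeholder
    "interval": "5min",
    "adjusted": "false",       # raw (as-traded) values
    "apikey": "YOUR_API_KEY",  # placeholder
}
resp = requests.get("https://www.alphavantage.co/query", params=params)
data = resp.json()
```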
| 0easy
|
Title: First-class function type initializes too late
Body: <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
`numba.tests.test_function_type.TestFunctionTypeExtensions.test_wrapper_address_protocol_libm` fails when run by itself with:
```pytb
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
non-precise type pyobject
During: typing of argument at /numba/numba/tests/test_function_type.py (616)
File "numba/tests/test_function_type.py", line 616:
def myeval(f, x):
^
```
Users can work around this by triggering the compiler once before using a first-class function type. For example, `jit(lambda: None)()` just to initialize everything. | 0easy
|
Title: Add providers classes for `mistralai`, `cohere`, `anthropic` and `groq`
Body: At the moment, we have the models for all of them, but since we introduced the `Provider`s classes, we missed the ones for:
- [x] MistralAI https://github.com/pydantic/pydantic-ai/pull/1118
- [ ] Cohere
- [x] Anthropic
- [x] Groq: https://github.com/pydantic/pydantic-ai/pull/1084
PR welcome. | 0easy
|
Title: Docs need different path to images
Body: https://igel.readthedocs.io/en/latest/_sources/readme.rst.txt includes a link to the assets/igel-help.gif, but that path is broken on readthedocs.
readme.rst is included as ../readme.rst in the sphinx build.
The gifs are in assets/igel-help.gif
The sphinx build needs to point to the asset directory, absolutely:
.. image:: /assets/igel-help.gif
I haven't made a patch, because I haven't tested this. | 0easy
|
Title: OpenAI spec non-streaming response contains multiple `space` tokens
Body: While serving a Llama-3 8B Instruct LLM using the OpenAI spec, the endpoint returns multiple spaces between words. This is because Llama's generated tokens already contain leading spaces.
We should use an empty string to concatenate the [OpenAI non-streaming chat content](https://github.com/Lightning-AI/LitServe/blob/81fffb66c6b8a95562317943e7de77dc1f2a5faf/src/litserve/specs/openai.py#L359-L364); a one-line fix sketch follows the code below.
```
for i, streaming_response in enumerate(pipe_responses):
msgs = []
tool_calls = None
async for chat_msg in streaming_response:
chat_msg = json.loads(chat_msg)
logger.debug(chat_msg)
chat_msg = ChatMessage(**chat_msg)
msgs.append(chat_msg.content)
if chat_msg.tool_calls:
tool_calls = chat_msg.tool_calls
# Is " " correct choice to concat with?
content = " ".join(msgs)
msg = {"role": "assistant", "content": content, "tool_calls": tool_calls}
```
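A minimal fix sketch: join the streamed chunks without a separator, since the tokens themselves already carry any needed spaces:
```python
# proposed change: concatenate streamed message chunks verbatim
content = "".join(msgs)
```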
Here is the Llama response:
<img width="543" alt="image" src="https://github.com/Lightning-AI/LitServe/assets/21018714/bc5c7c67-8f7a-4a98-a59b-0bcb29347df5">
| 0easy
|
Title: Benchmarking
Body: Can you please add some performance numbers to the main project docs indicating inference latency on some common hardware options, e.g. AWS p2, GCP GPU instances, CPU inference, Raspberry Pi, etc. | 0easy
|
Title: Marketplace - Reduce the margin under "Featured agents" and the Featured cards, reduce it to 24px
Body: Reduce the margin under "Featured agents" and the Featured cards to 24px.
<img width="1404" alt="Screenshot 2024-12-13 at 16 55 17" src="https://github.com/user-attachments/assets/2f2bd694-0847-449f-8139-3eb722dc8a38" /> | 0easy
|
Title: [DOC] Add examples for QuerySearch in SimilaritySearch
Body: ### Describe the issue linked to the documentation
Just having a play with similarity search; some examples in the docstring would be helpful.
### Suggest a potential alternative/fix
_No response_ | 0easy
|
Title: Old gradebook.db format error is not propagated to Formgrader interface
Body: When a user has a version of gradebook.db that has not been upgraded to the latest version,
Formgrader will hang when it tries to open the database file.
The error is only visible in the log output and not propagated to the web interface. | 0easy
|
Title: Replace `freeze.py` with `conda-lock`
Body:
### Proposed change
We use [freeze.py](https://github.com/jupyter/repo2docker/blob/master/repo2docker/buildpacks/conda/freeze.py) to generate fully specified `environment.yml` files out of our `environment.yml` files. It creates a docker image, installs everything, and then lists out the installed packages.
Instead, we could use [conda-lock](https://pypi.org/project/conda-lock/). This resolves `environment.yml` files to fully specified lock files we can use. It doesn't install the packages either, so should be much faster.
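A rough sketch of what the replacement could look like (assuming the `conda-lock` CLI is installed; treat the exact flags as illustrative):
```python
import subprocess

# Resolve environment.yml to a fully specified lock file without
# building a Docker image or installing any packages.
subprocess.run(
    ["conda-lock", "lock", "--file", "environment.yml",
     "--platform", "linux-64"],
    check=True,
)
```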
### Alternative options
Do nothing! Our current solution works well.
### Who would use this feature?
Developers of repo2docker who modify `environment.yml` files. Currently the freezing process can take a while - this would make it faster.
### How much effort will adding it take?
We need to investigate if conda-lock will actually do what we want. After that, implementing it shouldn't be *too* difficult - a couple hours of work for someone very familiar with the codebase, maybe 8-12h for someone who isn't.
### Who can do this work?
1. Decent python skills
2. A vague understanding of how conda's environment.yml files work
| 0easy
|
Title: [FEATURE] Include category tabs (all/images/news/etc) in header
Body: **Describe the bug**
The image tab has a different layout (CSS?) from the other tabs like "all", "videos", "news". The text of the menu is bigger on the image tab.
Moreover, in the image tab I'm not able to access maps nor books, those links aren't displayed.


Is this intended? It's a bit disturbing to have a sudden change of layout like this.
**To Reproduce**
Steps to reproduce the behavior:
1. Do a search.
2. Click on the "image" link in the menu.
3. See different layout from the previous page
**Deployment Method**
- [ ] Heroku (one-click deploy)
- [ ] Docker
- [ ] `run` executable
- [x] pip/pipx
- [ ] Other: [describe setup]
**Version of Whoogle Search**
- [x] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [ ] Version [version number]
- [ ] Not sure
**Desktop (please complete the following information):**
- OS: ArchLinux
- Browser: Firefox
- Version: 88.0 (64-bit)
**Additional context**
Add any other context about the problem here.
| 0easy
|
Title: Possibility to continue execution after WHILE limit is reached
Body: Hello everyone,
I'm creating this issue to discuss about a potential evolution of the WHILE loop behaviour when the limit is reached. As a reminder, in the current implementation of the WHILE loop, when the limit is reached, execution fails with the following message `WHILE loop was aborted because it did not finish within the limit of 5 iterations. Use the 'limit' argument to increase or remove the limit if needed.` (if the limit is 5 iterations).
In this PR #4561, I suggested adding a third argument to allow setting a custom error message when the limit is reached. @pekkaklarck suggested using this third argument to control what happens when the limit is reached and, for example, allow execution to continue even then.
I have been thinking about it and as a result, I imagined the following scenarios :
```
*** Test Cases ***
Continue after limit is reached
[Documentation] The execution will continue after the WHILE loop even if the limit is reached
WHILE True limit=5 limit_reached_behavior=CONTINUE
Log Test
END
Fail with custom error message if limit is reached
[Documentation] The execution will fail with custom error message if the limit is reached
WHILE True limit=5 limit_reached_behavior=Fail Custom error message
Log Test
END
Run keyword and continue if limit is reached
[Documentation] The execution will execute the keyword and continue execution if the limit is reached
WHILE True limit=5 limit_reached_behavior=Example keyword
Log Test
END
*** Keywords ***
Example keyword
Log Example
```
If `limit_reached_behavior` equals `CONTINUE`, the execution simply continues if the limit is reached.
If `limit_reached_behavior` doesn't equal `CONTINUE`, the keyword given in `limit_reached_behavior` is executed.
What do you think of such scenarios? Feel free to suggest other scenarios!
I am not very comfortable with the idea of using `CONTINUE` to continue execution if the limit is reached, because the string `CONTINUE` is already used to stop executing the current loop iteration and continue to the next one.
It would also be very interesting to discuss what the option name should be.
Such scenarios also raise the following question: where and how should we display the keyword executed after reaching the limit in `log.html`?
Thank you very much for taking the time to read this issue! | 0easy
|
Title: integration test: Complete the tutorial.
Body: Write a playwright test for following the tutorial on the frontend.
You can read more about testing here: https://dev-docs.agpt.co/platform/contributing/tests/
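A bare-bones starting point might look like this (Python Playwright for illustration; the URL and selector are assumptions about the frontend, not the actual implementation):
```python
from playwright.sync_api import sync_playwright

# Hypothetical smoke test: open the frontend and start the tutorial flow.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:3000/")                  # assumed dev-server URL
    page.get_by_role("button", name="Tutorial").click()  # assumed entry point
    browser.close()
```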
| 0easy
|
Title: Weakness: duplicated path concatenation in the toolkit/dataset/vot.py
Body: As the title describes, there exists two times path concatenations for the member variable named `img_names` belonged to the `VOTVideo` class.
One of them occurs at `line 18` in `toolkit/dataset/video.py` , and the other one is at `line 53` in `toolkit/dataset/vot.py` if the variable `load_img` is set to `False`.
Here are some examples.
absolute path
```Python
>>> a = '/home/username/test_dir'
>>> b = '/home/username/test_dir/test.py'
>>> c = os.path.join(a, b)
>>> print(c)
/home/username/test_dir/test.py # valid path
```
relative path
```Python
>>> a = './test_dir'
>>> b = './test_dir/test.py'
>>> c = os.path.join(a, b)
>>> print(c)
./test_dir/./test_dir/test.py # invalid path
```
Consequently, if users pass a relative path to the `toolkit.datasets.DatasetFactory.create_dataset` method, their programs will crash. Please fix this promptly. | 0easy
|
Title: AttributeError: 'TemplateResponse' object has no attribute 'charset'
Body: Django==1.7.6
splinter==0.7.2
lxml==3.4.2
```
# coding=utf-8
from django.test import LiveServerTestCase
from django.contrib.auth import get_user_model
User = get_user_model()
from splinter import Browser
class AdminTestCase(LiveServerTestCase):
def setUp(self):
User.objects.create_superuser(
username='admin',
password='admin',
email='admin@example.com'
)
self.splinter = Browser('django')
super(AdminTestCase, self).setUp()
def test_create_user_splinter(self):
count_1 = User.objects.all().count()
self.splinter.visit('%s%s' % (self.live_server_url, "/admin/"))
self.splinter.find_by_css("#id_username").fill("admin")
self.splinter.find_by_css("#id_password").fill("admin")
button = self.splinter.find_by_css('[type="submit"]')
self.assertNotEqual(button, None)
button.click()
self.splinter.visit('%s%s' % (self.live_server_url, "/admin/app/user/add/"))
self.splinter.find_by_css("#id_username").fill("test")
self.splinter.find_by_css("#id_password1").fill("test")
self.splinter.find_by_css("#id_password2").fill("test")
self.splinter.find_by_css("#user_form [type=submit]").first.click()
count_2 = User.objects.all().count()
self.assertEqual(count_1, count_2 - 1)
```
```
Traceback (most recent call last):
File "src/app/tests/test_splinter.py", line 23, in test_create_user_splinter
self.splinter.find_by_css("#id_username").fill("admin")
File "env/lib/python2.7/site-packages/splinter/driver/djangoclient.py", line 174, in find_by_css
return self.find_by_xpath(xpath, original_find="css", original_selector=selector)
File "env/lib/python2.7/site-packages/splinter/driver/djangoclient.py", line 177, in find_by_xpath
html = self.htmltree
File "env/lib/python2.7/site-packages/splinter/driver/djangoclient.py", line 144, in htmltree
self._html = lxml.html.fromstring(self.html)
File "env/lib/python2.7/site-packages/splinter/driver/djangoclient.py", line 154, in html
return self._response.content.decode(self._response.charset)
AttributeError: 'TemplateResponse' object has no attribute 'charset'
```
| 0easy
|
Title: Confusing error message when using `rebot` and output file contains no tests
Body: To reproduce:
```
$ touch empty.robot
$ robot --run-empty-suite empty.robot
...
$ rebot output.xml
[ ERROR ] Suite 'Empty' contains no tests after model modifiers.
Try --help for usage information.
```
Mentioning model modifiers is confusing to users, especially when they didn't use them and probably don't even know what they are. The message is fine if that part is simply removed.
Originally reported and also fixed in PR #5307 by @jnhyperion. A test for the fix needs to be added. | 0easy
|
Title: `lineno` of keywords executed by `Run Keyword` variants is `None` in dry-run
Body: When executing a dry run with any kind of `Run Keyword` variant, the child keyword's `lineno` will have the wrong type in listener API v2.
`lineno` should be `int` but in that specific case it is None.
At least according to the typing of `StartKeywordAttributes` and the other situations, where a line number is not available.
Example:
```robotframework
*** Settings ***
Library Listener.py
*** Test Cases ***
Reproduction
Run Keyword Log this has the wrong lineno
```
```python
import json
from robot.api import logger
class Listener:
def __init__(self):
self.ROBOT_LIBRARY_LISTENER = self
self.ROBOT_LISTENER_API_VERSION = 2
def _start_keyword(self, name, attributes):
logger.console(f"\nKeyword started: {name}")
logger.console(f" attributes: \n{json.dumps(attributes, indent=2)}")
def kw(self):
pass
``` | 0easy
|
Title: feature: proper support of matplotlib like in Jupyter
Body: In https://github.com/widgetti/solara/commit/2b321eeff9f5e288a92d401b968d10671c22c959 we added support to notebook rendering for matplotlib using `pylabtools.select_figure_formats(shell, ["png"])`. This will make `display(fig)` work. Ideally, this works out of the box, without having a matplotlib dependency.
It would be great if someone could figure out how. | 0easy
|
Title: [DOCS] Multi-gpu demo
Body: Demo of multi-GPU with RAPIDS.ai + Graphistry
- [ ] embeddable in Graphistry getting started dashboard: smart defaults listed
- [ ] show both out-of-core + multi-gpu
- [ ] encapsulated: no need to register etc. to get data
- [ ] tech: dask_cudf, bsql, & cuGraph. OK if split into multiple notebooks
- [ ] ultimately: x-post w/ rapids+blazing repos?
The taxi dataset is interesting. Maybe run that -> UMAP / k-nn -> viz?
Or maybe something better that is open access and more inherently graph-y, like app logs?
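A rough notebook-cell sketch of the taxi idea above (illustrative only; assumes RAPIDS and PyGraphistry are installed, and the file path and column names are placeholders):
```python
import cudf
import graphistry

# Assumes graphistry.register(...) was called earlier in the notebook.
trips = cudf.read_parquet("taxi.parquet")                   # placeholder path
g = graphistry.edges(trips, "pickup_zone", "dropoff_zone")  # placeholder columns
g.plot()
```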
| 0easy
|
Title: Add notebook example for torchscript example
Body: With #225 we added support for torchscript. It would be good to have a notebook showcasing this new feature. | 0easy
|
Title: [BUG] readthedocs build deprecations
Body: Looks like we need to modify the readthedocs yml as per below:
```
Keep documenting,Read the Docs
Hello,
You are receving this email because your Read the Docs project is impacted by an upcoming deprecation.
Read the Docs used to pre-install common scientific Python packages like scipy, numpy, pandas, matplotlib and others at system level to speed up the build process. However, with all the work done in the Python ecosystem and the introduction of "wheels", these packages are a lot easier to install via pip install and these pre-installed packages are not required anymore. If you have Apt package dependencies, they can be installed with [build.apt_packages](https://docs.readthedocs.io/en/stable/config-file/v2.html#build-apt-packages).
With the introduction of our new "Ubuntu 20.04" and "Ubuntu 22.04" Docker images, we stopped pre-installing these extra Python packages and we encouraged users to install and pin all their dependencies using a requirements.txt file. We have already stopped supporting "use system packages" on these newer images.
We are removing the "use system packages" feature on August 29th. Make sure you are installing all the required dependencies to build your project's documentation using a requirements.txt file and specifying it in your .readthedocs.yaml.
Here you have an example of the section required on the .readthedocs.yaml configuration file:
python:
install:
- requirements: docs/requirements.txt
The content of docs/requirements.txt would be similar to:
scipy==1.11.1
numpy==1.25.2
pandas==2.0.3
matplotlib==3.7.2
We are sending you this email because you are a maintainer of the following projects that are affected by this removal. Either using "Use system package" checkbox in the Admin UI, or the config key python.system_packages or python.use_system_site_packages in your .readthedocs.yaml file:
[graphistry-helm](https://readthedocs.org/projects/graphistry-helm/)
[pygraphistry](https://readthedocs.org/projects/pygraphistry/)
Read more about this in our [Reproducible builds](https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html) guide for more details.
Keep documenting,
Read the Docs
``` | 0easy
|
Title: Fix style issue `Inconsistent return statements PYL-R1710`
Body: Fix style issue `Inconsistent return statements PYL-R1710`: https://deepsource.io/gh/scanapi/scanapi/issue/PYL-R1710/occurrences


Child of https://github.com/scanapi/scanapi/issues/405 | 0easy
|
Title: api: add throttling scope and rate to throttled response
Body: e.g.
```
{"detail":"Request was throttled (dns_api_write_rrsets, 30/h). Expected available in 3109 seconds."}
``` | 0easy
|
Title: Update third party worflows in the gh actions.
Body: /kind feature
**Describe the solution you'd like**
[A clear and concise description of what you want to happen.]
Currently, we use the outdated version of workflows. So we need to upgrade the following workflows in the gh actions:
- [x] docker/setup-qemu-action
- [x] docker/setup-buildx-action
- [x] docker/metadata-action
- [x] docker/build-push-action
- [x] docker/login-action
- [x] actions/checkout
- [x] azure/setup-kubectl
- [x] medyagh/setup-minikube
- [x] actions/setup-python
- [x] actions/setup-node
- [x] actions/setup-g
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
---
<!-- Don't delete this message to encourage users to support your issue! -->
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| 0easy