text | labels
---|---
Title: Add IQR Anomaly Detector
Body: **Is your feature request related to a current problem? Please describe.**
I'd like to detect outliers using the Interquartile Range (IQR) method, i.e. an outlier is a point that lies more than 1.5 * IQR above (below) the upper (lower) quartile.
**Describe proposed solution**
Similar to the QuantileDetector, but compute IQR = Q3 - Q1, where Q1 and Q3 are the 25th and 75th percentiles (the first and third quartiles). Then set the upper fence to Q3 + 1.5*IQR and the lower fence to Q1 - 1.5*IQR.
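A minimal sketch of the proposed fence computation (assuming NumPy; illustrative only, not the detector's actual code):
```python
import numpy as np

values = np.array([1.0, 1.2, 0.9, 1.1, 1.3, 0.8, 1.0, 9.5])  # toy series

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr

# Points outside the fences are flagged as anomalies.
outliers = values[(values < lower_fence) | (values > upper_fence)]
print(outliers)  # [9.5]
```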
**Describe potential alternatives**
I'm not aware of any alternative.
**Additional context**
| 0easy
|
Title: Run xonsh with `-DVAR VAL`: ValueError: not enough values to unpack
Body: ```xsh
xonsh --no-rc --no-env -DCOLOR_OUTPUT 0 # space between -DVAR and VALUE
```
```xsh
Traceback (most recent call last):
  File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/main.py", line 478, in main
    args = premain(argv)
           ^^^^^^^^^^^^^
  File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/main.py", line 420, in premain
    env.update([x.split("=", 1) for x in args.defines])
  File "<frozen _collections_abc>", line 987, in update
ValueError: not enough values to unpack (expected 2, got 1)
Xonsh encountered an issue during launch
```
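A hedged sketch of a defensive parse that avoids the unpack error when no `=` is present (illustrative only; not the actual xonsh fix):
```python
defines = ["COLOR_OUTPUT", "FOO=1"]  # e.g. values collected from -D options

pairs = []
for define in defines:
    name, sep, value = define.partition("=")
    # partition always returns three parts, so a bare name maps to an empty value
    pairs.append((name, value))

print(dict(pairs))  # {'COLOR_OUTPUT': '', 'FOO': '1'}
```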
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Mutation argument `interfaces` does not work
Body: Mutations implement the same options as ObjectType (because they create ObjectType under the hood), but using `interfaces` meta argument does not have the expected effect of applying the GraphQL Interfaces to the Mutation Payload ObjectType.
Mutations should be able to apply the `interfaces` Meta argument so that the payload object type implements the desired interfaces.
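A hedged, minimal illustration of the reported usage (the type names here are made up for the example):
```python
import graphene


class HasOk(graphene.Interface):
    ok = graphene.Boolean()


class CreateThing(graphene.Mutation):
    class Meta:
        # Reported as having no effect on the generated payload ObjectType.
        interfaces = (HasOk,)

    class Arguments:
        name = graphene.String(required=True)

    ok = graphene.Boolean()

    def mutate(root, info, name):
        return CreateThing(ok=True)
```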
See this thread for more discussion:
_Originally posted by @jkimbo in https://github.com/graphql-python/graphene/pull/971_ | 0easy
|
Title: Remove all print statements
Body: Replace with proper logging for the worker | 0easy
|
Title: [DOC] Update Partitional clustering notebook
Body: ### Describe the issue linked to the documentation
the notebook
https://www.aeon-toolkit.org/en/stable/examples/clustering/partitional_clustering.html
does not list all the distances we now have and could be usefully updated (examples/clustering/partitional_clustering.ipynb).
### Suggest a potential alternative/fix
_No response_ | 0easy
|
Title: Add unit tests for `test_get_specs.py` and `test_run.py`
Body: [test_get_specs.py](https://github.com/scanapi/scanapi/blob/main/tests/unit/tree/endpoint_node/test_get_specs.py) and [test_run.py](https://github.com/scanapi/scanapi/blob/main/tests/unit/tree/endpoint_node/test_run.py) both require unit tests which is also being flagged in our [static analysis](https://deepsource.io/gh/scanapi/scanapi/issue/PYL-W0511/occurrences?page=1).
[ScanAPI Writing Tests Documentation](https://github.com/scanapi/scanapi/wiki/Writing-Tests)
| 0easy
|
Title: mc_dropout with predict_likelihood_parameters
Body: Hi, thanks for the prompt response regarding issue #2097 . I'm now able to utilize `mc_dropout` in `historical_forecasts`.
My current objective is to incorporate `mc_dropout` for obtaining epistemic uncertainty and `predict_likelihood_parameters` for acquiring aleatoric uncertainty. However, when setting `predict_likelihood_parameters` to `True`, I encounter an issue where I'm unable to use the sample method. Consequently, I'm limited to obtaining only one result with each call to `historical_forecasts` with `predict_likelihood_parameters=True` and `mc_dropout=True`.
I suspect that the solution might involve distinguishing the sample generation process that uses mc_dropout from the sample generation process based on the fitted distribution. Currently they both use the same `num_samples` parameter. | 0easy
|
Title: message badge position bug
Body: The code is in the `MessageTableEntry.tsx` file

| 0easy
|
Title: Fix Crawler.request_fingerprinter typing
Body: It should be `scrapy.utils.request.RequestFingerprinterProtocol`. | 0easy
|
Title: How can I contribute?
Body: Hi,
I need to contribute to a project for a class assignment.
Is there anything in particular you need contribution to?
Kind regards,
Hugo | 0easy
|
Title: Give more visibility to Python API docs
Body: Users are having a hard time finding the options available to customize the tasks declared in `pipeline.yaml`.
For example, one user asked how to select a custom kernel to execute a notebook task. The problem is there isn't a good connection between the tutorials and our Python API (people using pipeline.yaml do not directly use the Python API, but that's what powers it under the hood). So a few comments to improve the experience:
- [ ] Tutorials should link to the Python API in relevant cases. For example, the [SQL pipelines](https://docs.ploomber.io/en/latest/get-started/sql-pipeline.html) tutorial should link to the relevant sections in the [Python API](https://docs.ploomber.io/en/latest/api/python_api.html). We should repeat that for all tutorials. | 0easy
|
Title: FastAPI RegisterTortoise cannot add exception handlers during FastAPI lifespan
Body: **Describe the bug**
The RegisterTortoise class cannot add exception handlers during FastAPI lifespan
**To Reproduce**
Use this snippet:
```python
from contextlib import asynccontextmanager

import uvicorn
from fastapi import FastAPI
from tortoise import Model, fields
from tortoise.contrib.fastapi import RegisterTortoise
from tortoise.exceptions import DoesNotExist


class Test(Model):
    id = fields.IntField(primary_key=True)
    name = fields.TextField()


@asynccontextmanager
async def lifespan(app: FastAPI):
    async with RegisterTortoise(app=app, db_url='sqlite://:memory:', modules={"models": ["__main__"]}, add_exception_handlers=True):
        yield


app = FastAPI(lifespan=lifespan)


@app.get("/")
async def raise_error():
    raise DoesNotExist("Dummy")


if __name__ == "__main__":
    uvicorn.run(app)
```
Perform a GET on /:
```bash
$ curl localhost:8000
Internal Server Error
```
**Expected behavior**
Fix by explicitly adding an exception handler before FastAPI init:
```python
from contextlib import asynccontextmanager

import uvicorn
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from tortoise import Model, fields
from tortoise.contrib.fastapi import RegisterTortoise
from tortoise.exceptions import DoesNotExist


class Test(Model):
    id = fields.IntField(primary_key=True)
    name = fields.TextField()


@asynccontextmanager
async def lifespan(app: FastAPI):
    async with RegisterTortoise(app=app, db_url='sqlite://:memory:', modules={"models": ["__main__"]}, add_exception_handlers=True):
        yield


app = FastAPI(lifespan=lifespan)


@app.get("/")
async def raise_error():
    raise DoesNotExist("Dummy")


@app.exception_handler(DoesNotExist)
async def doesnotexist_exception_handler(request: Request, exc: DoesNotExist):
    return JSONResponse(status_code=404, content={"detail": "Not Found"})


if __name__ == "__main__":
    uvicorn.run(app)
```
Perform a GET on /, and receive a message handled by the exception handler.
```bash
$ curl localhost:8000
{"detail":"Not Found"}
```
| 0easy
|
Title: Revise the graphics used in README for `vizro-ai` and `vizro-core` so they render better in PyPI
Body: This is a good issue for someone with design knowledge or who knows how to optimally set out graphics to work on different GitHub themes and on PyPI. It touches docs but doesn't change any wording. No knowledge of Vizro is required.
https://pypi.org/project/vizro/
<img width="874" alt="image" src="https://github.com/mckinsey/vizro/assets/5180475/4bd85c1f-f389-48e4-be14-db96583f80c9">
and https://pypi.org/project/vizro-ai/
<img width="833" alt="image" src="https://github.com/mckinsey/vizro/assets/5180475/8a54cb7f-e053-4a8b-baf7-5122cc220622">
both look a bit rubbish right now because of the graphics choice in our package README files.
We need to revise these to work better, or make separate choices for PyPI. This is a potential good first issue for a contributor to Vizro if they have previous experience with PyPI page submissions. The remit is that the graphic/GIF should work in light/dark mode and on GitHub plus PyPI. | 0easy
|
Title: Numbers in the text cannot be read out
Body: | 0easy
|
Title: [Serve] Regression by HTTPS Support: Double Protocol in Endpoint URL
Body: The PR #3380 introduces the following change:
https://github.com/skypilot-org/skypilot/blob/735ce299d17fc1c0f54afd364dfec2e5b9ff6b46/sky/serve/core.py#L310-L321
But when running sky serve up with controller hosted on Kubernetes, this change causes the endpoint to be rendered incorrectly. Simply run `sky serve up examples/serve/http_server/task.yaml -y -n sky-service-6c01 --cloud gcp`, and at the end of the logs there would be:
```console
📋 Useful Commands
├── To check service status:    sky serve status sky-service-6c01 [--endpoint]
├── To teardown the service:    sky serve down sky-service-6c01
├── To see replica logs:        sky serve logs sky-service-6c01 [REPLICA_ID]
├── To see load balancer logs:  sky serve logs --load-balancer sky-service-6c01
├── To see controller logs:     sky serve logs --controller sky-service-6c01
├── To monitor the status:      watch -n10 sky serve status sky-service-6c01
└── To send a test request:     curl http://http://localhost:30060/skypilot/default/sky-serve-controller-e2dc6f0f-e2dc6f0f/30001
```
This happens because the `socket_endpoint` may already include the protocol (e.g., if it's an instance of `HTTPSocketEndpoint`; that's exactly the case on Kubernetes, where `_query_ports_for_ingress` returns a list of `HTTPEndpoint` and `HTTPSEndpoint`). Adding `f'{protocol}://'` results in a double protocol prefix (`http://http://`).
This could potentially break the 'http_server' smoke tests in Kubernetes.
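A hedged sketch of the failure mode and a possible guard (plain strings; not the actual SkyPilot endpoint classes):
```python
endpoint = "http://localhost:30060/skypilot/default"  # already includes the protocol
protocol = "http"

naive = f"{protocol}://{endpoint}"
print(naive)  # http://http://localhost:30060/skypilot/default

# Only prepend the protocol when it is not already there.
guarded = endpoint if endpoint.startswith(("http://", "https://")) else f"{protocol}://{endpoint}"
print(guarded)  # http://localhost:30060/skypilot/default
```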
Cc'ing @cblmemo | 0easy
|
Title: Avoid documents indexing in search engine (until we allow intentionally publishing pages)
Body: ## Bug Report
**Problematic behavior**
Some documents are available publicly (without being logged in) and may thus end up being indexed by search engines.
**Expected behavior/code**
We don't want this to happen until we implement the possibility to "publish" a page intentionally.
**Possible Solution**
Documents should be flagged as "noindex".
Add a meta tag in the <head> section of the page (See https://developers.google.com/search/docs/crawling-indexing/block-indexing for more information):
```
<meta name="robots" content="noindex">
```
It will have to be removed on published documents once we allow that. | 0easy
|
Title: Refactoring: commands cache
Body: We need to refactor commands_cache:
* Do not create one `_cmds_cache` list based on aliases and the list of files from PATH, because any change forces rebuilding the whole cache. Implement a layers approach instead: aliases are the first layer, files are the second layer. Then you can update the first or second layer without rebuilding everything. The get function just overlaps the layers on the fly to extract a single row (see the sketch below this list).
* Use the list of files from Windows/System32 as a permanent layer and do not re-read it. The System32 directory is huge and persistent.
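A minimal sketch of the layered lookup idea (illustrative only; not xonsh's implementation):
```python
class LayeredCommandsCache:
    """Each layer maps a command name to its info; layers can be rebuilt independently."""

    def __init__(self):
        # Lookup order: aliases first, then files from PATH, then System32.
        self.layers = {"aliases": {}, "path_files": {}, "system32": {}}

    def update_layer(self, name, entries):
        # Rebuild only one layer; the other layers stay untouched.
        self.layers[name] = dict(entries)

    def get(self, cmd):
        # Overlap the layers on the fly to extract a single row.
        for layer in ("aliases", "path_files", "system32"):
            if cmd in self.layers[layer]:
                return self.layers[layer][cmd]
        raise KeyError(cmd)


cache = LayeredCommandsCache()
cache.update_layer("system32", {"where": "C:\\Windows\\System32\\where.exe"})
cache.update_layer("aliases", {"ll": "ls -la"})
print(cache.get("where"))
```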
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Remove Null Option from status.Task.tags (M2M) and fix warning
Body: **Describe the bug**
On running the django app, the terminal shows the following warning
```
WARNINGS:
status.Task.tags: (fields.W340) null has no effect on ManyToManyField.
```
**To Reproduce**
1. Run the django server
**Expected behavior**
The warning should be avoided. If the null option has no effect, it should be removed.
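A hedged sketch of the suggested change in a models.py (the related model name and other options here are assumptions, not the project's actual field definition):
```python
from django.db import models


class Task(models.Model):
    # before (triggers fields.W340, since null has no effect on ManyToManyField):
    # tags = models.ManyToManyField("Tag", null=True, blank=True)

    # after: drop the redundant null option
    tags = models.ManyToManyField("Tag", blank=True)
```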
**Screenshots**
 | 0easy
|
Title: Move to a single source of truth for docs - Remove duplicate info from READMEs
Body: autogpt_platform\\backend\\README.advanced.md and autogpt_platform\\backend\\README.md should just point people to the docs directory (docs/platform/advanced_setup|getting-started). Check that their content is all covered there and in the normal getting-started guide, then remove these two files and replace them with links to the docs sites, dev-docs.agpt.co and docs.agpt.co. Call out both sites and how they map to the released master branch vs the dev branch. | 0easy
|
Title: Access to absolute path of schema file through client
Body: ## Problem
I want to locate the schema file my client is basing itself upon through the client, enabling me to find the absolute path of the schema.prisma file through python without much hassle. This would make debugging easier.
## Suggested solution
A function that returns a path object pointing to the schema.prisma file the client is using.
## Alternatives
I'm not too sure; the generator could as well store the schema path in an environment variable while generation is occurring.
## Additional context
My actual problem isn't related to the client, but adding it would ease debugging. My libraries need to find a config file stored in the same directory as the schema, and prisma is my only dependency that generates a client based on a file, so my first thought was to see if prisma had such a function. (I can't use an absolute path because in production, it could be installed to either a global namespace or a virtual one, so I can't bank on knowing where my libraries will be)
Thanks for the good work Robert, I'm loving this client, it's my go to database solution for any project that needs one
| 0easy
|
Title: Add hover information
Body: Nice package! It'd be great if hover information was included | 0easy
|
Title: Missing Diff section in unstructured profilers
Body: **Please provide the issue you face regarding the documentation**
Need to mimic the diff section in structured_profilers: https://github.com/capitalone/DataProfiler/blob/main/examples/structured_profilers.ipynb
Add to unstructured_profilers example | 0easy
|
Title: Add index selection to JointPlotVisualizer
Body: The joint plot visualizer shows the relationship between a single feature in `X` with the target `y`. Right now the choices are for a user to pass in a name for a column to select from a DataFrame or to pass in a single column array. We should expand this to allow the user to pass an index to ensure that different data types can be passed to the visualizer.
### Proposal/Issue
Currently, the plot accepts as hyperparameters `feature` (soon to be `column`) and `target`. Both `feature`/`column` and `target` act as x and y labels in the visualizer respectively (and in fact, `target` has no purpose other than to act as a label). The `feature`/`column` attribute, however, acts as a selection mechanism to describe what to plot against the target from the 2D array, `X` passed into fit. Right now the options are:
1. Pass in a string to identify a column in a DataFrame
2. Pass in None and a single column `X`
We should expand this to include:
1. Pass in an int - select the column by index and label with the id
2. If `target` is specified but not `y`, then use `target` similarly to `feature`/`column` to select from `X`
3. Test use cases with matrix, python list, structured array, and data frame
**Caveats**: I think 2. requires some discussion, which we should undertake during the PR. I'm not sure this makes a lot of sense, but would be useful. Potentially changing the names of the visualizer parameters would help make this behavior more clear.
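A hedged usage sketch of option 1 above (the int-index behavior is the proposal, so this call only works once implemented; the import path is Yellowbrick's existing one):
```python
import numpy as np
from yellowbrick.features import JointPlotVisualizer

X = np.random.rand(100, 3)
y = np.random.rand(100)

# Proposed: an int selects the column by position and is used as the x-axis label.
viz = JointPlotVisualizer(feature=1, target="target")
viz.fit(X, y)
viz.poof()
```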
| 0easy
|
Title: The BO notebook still fails
Body: https://scikit-optimize.github.io/notebooks/bayesian-optimization.html
CC: @MechCoder
Again, I think we should fix `n_restarts_optimizer` in `GaussianProcessRegressor` to a larger value once and for all. | 0easy
|
Title: Add badges to README
Body: - [x] PyPI version badge
- [ ] conda-forge version badge (Depends on #367)
- [x] CI tests workflow badge
---
Probably should also add instructions for the "Uses CCDS" badge to the README. https://drivendata.co/blog/ccds-v2#ccds-badge | 0easy
|
Title: [ENH] Radial Basis Functions
Body: Add radial basis functions for Time series transformations based on:
https://github.com/koaning/scikit-lego/blob/main/sklego/preprocessing/repeatingbasis.py | 0easy
|
Title: [Feature request] Add apply_to_images to RandomFog
Body: | 0easy
|
Title: [DOCS] preprocessing API gone
Body: Probably when the preprocessing part of the library got split up into a folder, we forgot to make sphinx aware of it. Docs look like this now:

| 0easy
|
Title: Option to define snapshot directory name
Body: **Is your feature request related to a problem? Please describe.**
I prefer to name my snapshot directory `snapshots` instead of `__snapshots__`. I feel the underscores are unnecessary noise.
**Describe the solution you'd like**
A CLI option `--snapshot-dirname <name>`
**Describe alternatives you've considered**
None
**Additional context**
In JS, I use vitest's [resolveSnapshotPath](https://vitest.dev/config/#resolvesnapshotpath) option for the same; it is a bit more flexible as it allows full customization of the path, but I guess I would be happy with a static option just to define the directory name. | 0easy
|
Title: Add Finance Notebook to Provide Examples of convert_currency() and inflate_currency()
Body: # Brief Description
I'd like to write a notebook that provides a very brief example of how to use the finance submodule's convert_currency() and inflate_currency() methods. The notebook would read in a single data frame (or create a simple example data frame in place), and then call on the two aforementioned methods to provide examples of their use.
This notebook would likely cover the following pyjanitor functions:
- finance.convert_currency()
- finance.inflate_currency()
# Dataset
This notebook would be focused on showing the functionality and results of the convert_currency() and inflate_currency() methods; therefore, it may just be easiest to create a very simple/minimal dataframe with random numeric values that we assume represent a given currency (e.g., USD) and dollar year.
| 0easy
|
Title: Add "changed" timestamp for RRSets
Body: As per discussion in #331: we currently record when an RR set was last touched, i.e. updates including no-op updates. It would be nice to have the timestamp of the last actual change. | 0easy
|
Title: [Bug]: Incorrect pcolormesh when shading='nearest' and only the mesh data C is provided.
Body: ### Bug summary
The coloring of meshgrid of `plt.pcolormesh(C, shading='nearest')` is inconsistent with other types of shading, e.g., 'auto', 'flat', 'gouraud'.
### Code for reproduction
```Python
import numpy as np
import matplotlib.pyplot as plt
x, y = np.mgrid[0:11, 0:11]
C = x + y
plt.figure(figsize=(20, 5))
plt.subplot(151)
plt.pcolormesh(C, shading='nearest') # Incorrect Plot
plt.title('shading=nearest, without x, y')
plt.subplot(152)
plt.pcolormesh(x, y, C, shading='nearest') # Expected Plot
plt.title('shading=nearest, with x, y')
plt.subplot(153)
plt.pcolormesh(C, shading='auto')
plt.title('shading=auto, without x, y')
plt.subplot(154)
plt.pcolormesh(C, shading='flat')
plt.title('shading=flat, without x, y')
plt.subplot(155)
plt.pcolormesh(C, shading='gouraud')
plt.title('shading=gouraud, without x, y')
plt.show()
```
### Actual outcome

### Expected outcome

### Additional information
This bug happens when only the mesh data C is provided, and using `shading = 'nearest'`.
### Operating system
_No response_
### Matplotlib Version
3.9.2
### Matplotlib Backend
_No response_
### Python version
_No response_
### Jupyter version
_No response_
### Installation
None | 0easy
|
Title: Overriding `formats`, `export_formats`, `import_formats` not documented.
Body: I wanted to restrict valid formats differently for different models.
By examining the code I discovered that I could set `formats`, `export_formats`, or `import_formats` as attrs on the admin to control the allowed formats.
This is not documented.
I am not sure if this was an intended feature, but it is useful enough that I recommend it be promoted to such, and documented.
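A hedged sketch of the undocumented usage described above (the admin and model names are illustrative; the format classes come from import_export.formats.base_formats):
```python
from django.contrib import admin
from import_export.admin import ImportExportModelAdmin
from import_export.formats.base_formats import CSV, XLSX

from .models import Book  # hypothetical model


@admin.register(Book)
class BookAdmin(ImportExportModelAdmin):
    # Restrict both import and export to these formats for this model only.
    formats = [CSV, XLSX]
    # Or control them separately:
    # import_formats = [CSV]
    # export_formats = [CSV, XLSX]
```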
| 0easy
|
Title: Baseline model raise exception when using float32 value
Body: At 'Explain' mode, for regression task.
Other models complete training successfully.
errors.md file content:
```
Error for 1_Baseline
Object of type 'float32' is not JSON serializable

Traceback (most recent call last):
  File "/home/moshe/.local/lib/python3.6/site-packages/supervised/base_automl.py", line 970, in _fit
    trained = self.train_model(params)
  File "/home/moshe/.local/lib/python3.6/site-packages/supervised/base_automl.py", line 312, in train_model
    mf.save(model_path)
  File "/home/moshe/.local/lib/python3.6/site-packages/supervised/model_framework.py", line 395, in save
    fout.write(json.dumps(desc, indent=4))
  File "/usr/lib/python3.6/json/__init__.py", line 238, in dumps
    **kw).encode(obj)
  File "/usr/lib/python3.6/json/encoder.py", line 201, in encode
    chunks = list(chunks)
  File "/usr/lib/python3.6/json/encoder.py", line 430, in _iterencode
    yield from _iterencode_dict(o, _current_indent_level)
  File "/usr/lib/python3.6/json/encoder.py", line 404, in _iterencode_dict
    yield from chunks
  File "/usr/lib/python3.6/json/encoder.py", line 437, in _iterencode
    o = _default(o)
  File "/usr/lib/python3.6/json/encoder.py", line 180, in default
    o.__class__.__name__)
TypeError: Object of type 'float32' is not JSON serializable
``` | 0easy
|
Title: Option to create code task cells should be disabled
Body: > Regarding the drop down menu, yes, I think it would be better to only have the "task cell" option visible for markdown cells.
[The documentation](https://nbgrader.readthedocs.io/en/stable/user_guide/creating_and_grading_assignments.html#manually-graded-task-cells) asserts this too "If you select the βManually graded taskβ option (available for markdown cells), the nbgrader extension will ..." However, the "Manually graded task" option seems to be available for code cells as well:

I don't really care either way. It's just that I'm implementing a hopefully 100% compatible version of nbgrader from scratch for https://cocalc.com, and I need to know what to implement...
_Originally posted by @williamstein in https://github.com/jupyter/nbgrader/pull/984#issuecomment-539255861_
cc @danielmaitre --- would disabling this option break your use case? | 0easy
|
Title: Using command line arguments to override values in settings file
Body: Hi, I'm trying to use command-line arguments to override values in the settings file, similar to the Spring Boot way of using Java properties. Is it possible with dynaconf? | 0easy
|
Title: Add supports_multivariate to ForecastingModel
Body: **Is your feature request related to a current problem? Please describe.**
It is difficult to determine whether a model supports multivariate time series or only univariate ones without building a dictionary of this.
**Describe proposed solution**
`ForecastingModel` gets a new attribute `supports_multivariate`. This is in line with `supports_past_covariates` etc...
**Describe potential alternatives**
Modeling this via a superclass. But that would complicate things unnecessarily.
**Additional context**
This is cool if iterating over multiple models and evaluating their performance (semi-)automatically. This would allow to generate table contents similar to [this](https://unit8co.github.io/darts/index.html#forecasting-models) when reporting on models behaviour (error, ...). | 0easy
|
Title: Error when replacing file on Windows
Body: ```pytb
====================================================== DAG build failed ======================================================
----- NotebookRunner: fit -> MetaProduct({'model': File('products\\model.pickle'), 'nb': File('products\\report.html')}) -----
----------------------------------------- C:\Users\edubl\Desktop\proj\scripts\fit.py -----------------------------------------
Traceback (most recent call last):
  File "c:\users\edubl\desktop\proj\venv-proj\lib\site-packages\ploomber\tasks\abc.py", line 562, in _build
    res = self._run()
  File "c:\users\edubl\desktop\proj\venv-proj\lib\site-packages\ploomber\tasks\abc.py", line 669, in _run
    self.run()
  File "c:\users\edubl\desktop\proj\venv-proj\lib\site-packages\ploomber\tasks\notebook.py", line 524, in run
    path_to_out_ipynb.rename(path_to_out)
  File "C:\Users\edubl\miniconda3\envs\scaffold\lib\pathlib.py", line 1359, in rename
    self._accessor.rename(self, target)
FileExistsError: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\edubl\\Desktop\\proj\\products\\report.ipynb' -> 'C:\\Users\\edubl\\Desktop\\proj\\products\\report.html'
The above exception was the direct cause of the following exception:
``` | 0easy
|
Title: take in a single string as user_defined_models, in addition to an array
Body: right now we force the user to pass in an array, even if they're just passing in a single model to use.
| 0easy
|
Title: Document how user input to Trigger on works
Body: ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Documentation
### Choose one or more terms that describe this issue:
- [X] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [ ] development
- [X] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [X] user interface
### Other terms that describe this issue if not provided above:
Undocumented feature
### Which Linux distribution did you use?
N/A
### Which AutoKey GUI did you use?
Qt
### Which AutoKey version did you use?
0.96.0-beta.10
### How did you install AutoKey?
From AutoKey repo debs
### Can you briefly describe the issue?
In a comment to #406, @luziferius mentions that the Trigger on dropdown selection field accepts user input. AFAIK, this is not documented anywhere. It needs to be explored (in the code and in use) and documented.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
Click on `Set` Abbreviations on an action
Click on the Trigger on field
Type something and see what happens ...
### What should have happened?
Didn't expect that typing here was allowed
### What actually happened?
Typing is allowed, but there is no guidance as to what is allowed or how it can be used.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
```bash
N/A
```
### Anything else?
The source code will have to be examined to see how user input to this field is processed and how this feature can be used. | 0easy
|
Title: Implement DepthToSpace as a function
Body: DepthToSpace can be expressed as a function but it is not done yet: https://onnx.ai/onnx/operators/onnx__DepthToSpace.html#summary | 0easy
|
Title: make clickable title
Body: In the main view of Mercury, please make the notebook title a link to the notebook.

Titles should work the same as the open button. | 0easy
|
Title: Bug: Equality of DocVec
Body: Equality between instances of DocVec does not seem to work properly:
```python
from docarray import BaseDoc, DocList


class CustomDocument(BaseDoc):
    image: str


da = DocList[CustomDocument](
    [CustomDocument(image='hi') for _ in range(10)]
).to_doc_vec()

da2 = DocList[CustomDocument](
    [CustomDocument(image='hi') for _ in range(10)]
).to_doc_vec()

print(da == da2)
```
```bash
False
``` | 0easy
|
Title: Add option to disable reset button in `gr.Slider`
Body: - [x] I have searched to see if a similar issue already exists.
I notice that a reset button has been added to slider components. That is nice, but what would be even better is if you could add reset buttons for other components as well. For my workflow I specifically need reset buttons for (single-select) dropdown components and radio components.
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Additional context**
Add any other context or screenshots about the feature request here.
| 0easy
|
Title: Templates should be explicitly prohibited with WHILE
Body: We support templates with IF and FOR but not with TRY or WHILE. With TRY templates are explicitly prohibited, but with WHILE they are not. For example, the following is executed 10k times (default iteration limit) and fails each time with a strange error "Keyword name cannot be empty.".
```robotframework
*** Test Cases ***
Example
[Template] Log
WHILE True
Hello!
END
```
This should be changed so that execution fails immediately with a clear message like "WHILE does not support templates.".
With TRY the validation that templates aren't used is currently done in the phase when the parsing model is transformed to the execution model. It would be better to do that already earlier when the parsing model is validated to make the error visible also for IDE plugins and other tools working with the parsing model. This change can be done as part of this issue at the same time when validation is added to WHILE. | 0easy
|
Title: Missing test
Body: While working on a related compute module, I noticed that `drop_nodes()` is not tested. is it intentional or an oversight? | 0easy
|
Title: ON DELETE SET NULL (column) not supported at the moment?
Body: ### Describe the use case
I'm trying to issue an `ON DELETE SET NULL (column)` as described in the [Postgres docs](https://www.postgresql.org/docs/16/ddl-constraints.html), available since version 15.
I'm getting this error from SQLAlchemy, which makes me think we don't support this yet. I also can't find this mentioned in the docs.
I am using sqlalchemy 2.0.31.
`sqlalchemy.exc.CompileError: Unexpected SQL phrase: 'SET NULL (applied_exchange_rate_id, base_currency_amount)' (matching against '^(?:RESTRICT|CASCADE|SET NULL|NO ACTION|SET DEFAULT)$')`
### Databases / Backends / Drivers targeted
postgresql and driver psycopg2
### Example Use
```
applied_exchange_rate_id = Column(pg_UUID, ForeignKey("exchange_rates.id", ondelete="SET NULL (applied_exchange_rate_id, base_currency_amount)"), nullable=True)
applied_custom_exchange_rate_id = Column(pg_UUID, ForeignKey("exchange_rates.id", ondelete="SET NULL (applied_custom_exchange_rate_id, base_currency_amount)"), nullable=True)
```
### Additional context
_No response_ | 0easy
|
Title: Improve type detection mechanism for IDs
Body: In Lux, we detect attributes that look like an ID and avoid visualizing them.

<img src="https://user-images.githubusercontent.com/5554675/93588430-fadb8300-f9dd-11ea-84a1-ba985fe9410a.png" width=200>
There are several issues related to the current type detection mechanisms:
1) The function [`check_if_id_like`](https://github.com/lux-org/lux/blob/master/lux/utils/utils.py#L73) needs to be improved so that we are not relying on the `attribute_contain_id` check too much, i.e. even if the attribute name does not contain "ID" but the values look like an ID, we should still label it as an ID. The cardinality check `almost_all_vals_unique` is a good example, since most ID fields are largely unique. Another check we could implement is that the ID is spaced by a regular interval (e.g., 200, 201, 202, ...); this is somewhat of a weak signal, since it is not a necessary property of an ID.
~~BUG: We only trigger ID detection currently if the data type of the attribute is detected as an integer ([source](https://github.com/lux-org/lux/blob/master/lux/executor/PandasExecutor.py#L258)). We should fix this bug so that string attributes that are ID like (e.g., a CustomerID in the Churn dataset like "7590-VHVEG") are also detected as IDs.~~
Some test data can be found [here](https://github.com/lux-org/lux-datasets), feel free to find your own on Kaggle or elsewhere. For a pull request, please include [tests](https://github.com/lux-org/lux/blob/master/tests/test_type.py) to try out the bugfix on several datasets to verify that ID fields are being detected and that non-ID fields are not detected.
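A hedged sketch of the cardinality and regular-interval checks described in point 1 (illustrative only; not Lux's actual `check_if_id_like`):
```python
import pandas as pd


def looks_like_id(series: pd.Series) -> bool:
    """Heuristic ID check that does not rely on the attribute name."""
    # Cardinality check: ID fields are largely unique.
    almost_all_vals_unique = series.nunique() / max(len(series), 1) > 0.98
    # Weaker signal: integer IDs are often spaced by a regular interval (200, 201, 202, ...).
    regular_interval = False
    if pd.api.types.is_integer_dtype(series):
        diffs = series.sort_values().diff().dropna()
        regular_interval = len(diffs) > 0 and diffs.nunique() == 1
    return almost_all_vals_unique or regular_interval


print(looks_like_id(pd.Series(range(200, 300))))       # True
print(looks_like_id(pd.Series([1, 1, 2, 2, 3] * 20)))  # False
```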
| 0easy
|
Title: Poof() should return ax
Body: Make sure that all visualizers return the `self.ax` when calling poof(). This is so that users can always get access to the ax, for example when working inside a function in a notebook. In addition, it will encourage the behavior pattern of tweaking the plot, if desired, after poof(). We will be adjusting our documentation to use this form to also encourage this behavior:
```
ax = viz.poof()
```
This is why this behavior pattern is useful:
https://stackoverflow.com/questions/47450804/yellowbrick-increasing-font-size-on-yellowbrick-generated-charts
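A minimal sketch of the requested pattern (illustrative; actual Yellowbrick visualizers differ):
```python
import matplotlib.pyplot as plt


class VisualizerSketch:
    def __init__(self, ax=None):
        self.ax = ax or plt.gca()

    def poof(self):
        # ... finalize/draw the plot ...
        return self.ax  # return the axes so callers can keep tweaking the plot


viz = VisualizerSketch()
ax = viz.poof()
ax.set_title("tweaked after poof()")
```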
| 0easy
|
Title: Fix broken links in doc
Body: The doc/ directory contains our documentation. Some files there link to our [projects](https://github.com/ploomber/projects) repo, however, we're rolling out a folder layout change soon, which you can see by going to the [layout branch](https://github.com/ploomber/projects/tree/layout)
To avoid confusion: this issue is about fixing links on this repo that point to https://github.com/ploomber/projects
Example:
`spec-api-python` in the master branch, becomes `guides/spec-api-python` in the layout branch.
We need help updating those links, essentially we need to look for links containing `https://github.com/ploomber/projects/tree/master` then update the last part to match the new layout.
e.g.
`https://github.com/ploomber/projects/tree/master/spec-api-python`
becomes
`https://github.com/ploomber/projects/tree/master/guides/spec-api-python`
Here's an example of a file that needs an update: https://github.com/ploomber/ploomber/blob/master/doc/cookbook/grid.rst
note that the last link is: https://github.com/ploomber/projects/blob/master/mlflow/README.ipynb
However `mlflow` ([master branch)](https://github.com/ploomber/projects/tree/master/mlflow) moved to `templates/mlflow` ([layout branch)](https://github.com/ploomber/projects/tree/layout/templates/mlflow)
| 0easy
|
Title: `cached_property` attributes are called when importing library
Body: RobotFramework incorrectly calls cached_property when importing a Python library using the cached_property method.
I found a previous issue on this topic, #4838, and it seems that the issue has been solved in 6.1.1. But in fact, the issue still exists. Please check the following steps, explanations, and suggestions.
Steps:
1. Python version 3.11.6
2. RobotFramework version is 6.1.1.
3. Create a python file named test.py as below, two methods decorated by cached_property, one will print some message, and the other will raise a RuntimeError.
```python
from functools import cached_property


class TestCached:

    @cached_property
    def cached_property_with_print(self):
        print(f"This is in cached_property_with_print")

    @cached_property
    def cached_property_with_error(self):
        raise RuntimeError("This is in cached_property_with_error")
```
4. Create a robot file named test.robot as below, using the Import Library to import test.TestCached class.
```robot
*** Settings ***
Documentation     demo cached_property issue


*** Test Cases ***
1: import library with cached_property
    Import Library    test.TestCached
```
5. Run the test.robot and the console output is shown as below.
```bash
==============================================================================
Testcase
==============================================================================
Testcase.Test :: demo cached_property issue
==============================================================================
1: import library with cached_property This is in cached_property_with_print
[ WARN ] Imported library 'test.TestCached' contains no keywords.
1: import library with cached_property | PASS |
------------------------------------------------------------------------------
Testcase.Test :: demo cached_property issue | PASS |
1 test, 1 passed, 0 failed
==============================================================================
Testcase | PASS |
1 test, 1 passed, 0 failed
==============================================================================
```
6. From the console output, the cached_property_with_print was called, but there is no RuntimeError.(This is why issue #4838 was treated as solved)
Explanations:
1. Check the source code of RobotFramework, file robot.running.testlibraries, line 340-354:
```python
class _ClassLibrary(_BaseTestLibrary):

    def _get_handler_method(self, libinst, name):
        for item in (libinst,) + inspect.getmro(libinst.__class__):
            # `isroutine` is used before `getattr` to avoid calling properties.
            if (name in getattr(item, '__dict__', ())
                    and inspect.isroutine(item.__dict__[name])):
                try:
                    method = getattr(libinst, name)
                except Exception:
                    message, traceback = get_error_details()
                    raise DataError(f'Getting handler method failed: {message}',
                                    traceback)
                return self._validate_handler_method(method)
        raise DataError('Not a method or function.')
```
The condition `if (name in getattr(item, '__dict__', ()) and inspect.isroutine(item.__dict__[name]))` cannot filter out the cached_property method, because inspect.isroutine returns True for a cached_property instance. So the cached_property method is called by the line `method = getattr(libinst, name)`. When this line raises an error, a DataError is raised to its caller.
2. Check the source code of RobotFramework, file robot.running.testlibraries, line 271-276:
```python
def _try_to_get_handler_method(self, libcode, name):
    try:
        return self._get_handler_method(libcode, name)
    except DataError as err:
        self._adding_keyword_failed(name, err, self.get_handler_error_level)
        return None
```
When the error is raised from the line `return self._get_handler_method(libcode, name)`, the line `self._adding_keyword_failed(name, err, self.get_handler_error_level)` is executed. `_adding_keyword_failed` adds the error message to the syslog file (see the syslog below), so no error message is shown in the console output or the log file. This is why issue #4838 was treated as solved: in that case, the absence of an error in the console output misled us.
3. check syslog:
```
20231027 22:53:04.080 | DEBUG | Started keyword 'BuiltIn.Import Library'.
20231027 22:53:04.087 | INFO | Imported library class 'test.TestCached' from 'test.py'.
20231027 22:53:04.090 | INFO | In library 'test.TestCached': Adding keyword 'cached_property_with_error' failed: Getting handler method failed: This is in cached_property_with_error
20231027 22:53:04.090 | DEBUG | Details:
Traceback (most recent call last):
  File "...functools.py", line 1001, in __get__
    val = self.func(instance)
          ^^^^^^^^^^^^^^^^^^^
  File "...test.py", line 11, in cached_property_with_error
    raise RuntimeError("This is in cached_property_with_error")
RuntimeError: This is in cached_property_with_error
20231027 22:53:04.090 | INFO | In library 'test.TestCached': Adding keyword 'cached_property_with_print' failed: Not a method or function.
20231027 22:53:04.090 | INFO | In library 'test.TestCached': Adding keyword 'property_with_print' failed: Not a method or function.
```
Suggestions:
For the file robot.running.testlibraries, line 346, `and inspect.isroutine(item.__dict__[name])):`, add one more condition: `and callable(item.__dict__[name])`. The property and cached_property instances are not callable, so this condition can filter them out. Then the if condition would look like:
```python
if (name in getattr(item, '__dict__', ())
        and inspect.isroutine(item.__dict__[name])
        and callable(item.__dict__[name])):
```
Hope this issue can be solved and that this helps someone.
| 0easy
|
Title: DeprecationWarning: 'imghdr' is deprecated and slated for removal
Body: ### Current Behaviour
The imghdr module is deprecated in Python 3.11 by [PEP 594](https://peps.python.org/pep-0594/#imghdr) and is slated for removal in Python 3.13.
Ydata-profiling always imports it.
FYI, the streamlit folks recently addressed the identical environment regression: https://github.com/streamlit/streamlit/pull/7081
```
$ conda create -n ydata_env
...
$ (conda activate ydata_env && conda install -c conda-forge ydata-profiling)
...
$ (conda activate ydata_env && conda list | grep ydata && python --version && python -W error -c 'import ydata_profiling')
# packages in environment at /Users/jhanley/miniconda3/envs/ydata_env:
ydata-profiling 4.5.0 pyhd8ed1ab_0 conda-forge
Python 3.11.4
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/site-packages/ydata_profiling/__init__.py", line 7, in <module>
from ydata_profiling.compare_reports import compare
File "/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/site-packages/ydata_profiling/compare_reports.py", line 12, in <module>
from ydata_profiling.profile_report import ProfileReport
File "/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/site-packages/ydata_profiling/profile_report.py", line 20, in <module>
from visions import VisionsTypeset
File "/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/site-packages/visions/__init__.py", line 3, in <module>
from visions import types, typesets, utils
File "/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/site-packages/visions/utils/__init__.py", line 4, in <module>
from visions.utils.monkeypatches import imghdr_patch, pathlib_patch
File "/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/site-packages/visions/utils/monkeypatches/__init__.py", line 1, in <module>
from visions.utils.monkeypatches import imghdr_patch, pathlib_patch
File "/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/site-packages/visions/utils/monkeypatches/imghdr_patch.py", line 2, in <module>
from imghdr import tests
File "/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/imghdr.py", line 9, in <module>
warnings._deprecated(__name__, remove=(3, 13))
File "/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/warnings.py", line 514, in _deprecated
warn(msg, DeprecationWarning, stacklevel=3)
DeprecationWarning: 'imghdr' is deprecated and slated for removal in Python 3.13
```
### Expected Behaviour
The `import` with `-W error` should succeed without warnings, just like `python -c 'import ydata_profiling'` succeeds.
### Data Description
[empty set]
### Code that reproduces the bug
```Python
(as above)
```
### pandas-profiling version
4.5.0
### Dependencies
```Text
$ (conda activate ydata_env && conda --version && conda list)
conda 23.7.2
# packages in environment at /Users/jhanley/miniconda3/envs/ydata_env:
#
# Name Version Build Channel
attrs 23.1.0 pyh71513ae_1 conda-forge
brotli 1.0.9 hb7f2c08_9 conda-forge
brotli-bin 1.0.9 hb7f2c08_9 conda-forge
brotli-python 1.0.9 py311h814d153_9 conda-forge
bzip2 1.0.8 h0d85af4_4 conda-forge
ca-certificates 2023.7.22 h8857fd0_0 conda-forge
certifi 2023.7.22 pyhd8ed1ab_0 conda-forge
charset-normalizer 3.2.0 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
contourpy 1.1.0 py311h5fe6e05_0 conda-forge
cycler 0.11.0 pyhd8ed1ab_0 conda-forge
dacite 1.8.0 pyhd8ed1ab_0 conda-forge
dataclasses 0.8 pyhc8e2a94_3 conda-forge
fonttools 4.42.0 py311h2725bcf_0 conda-forge
freetype 2.12.1 h3f81eb7_1 conda-forge
htmlmin 0.1.12 py_1 conda-forge
idna 3.4 pyhd8ed1ab_0 conda-forge
imagehash 4.3.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.2 pyhd8ed1ab_1 conda-forge
joblib 1.3.2 pyhd8ed1ab_0 conda-forge
kiwisolver 1.4.4 py311hd2070f0_1 conda-forge
lcms2 2.15 h2dcdeff_1 conda-forge
lerc 4.0.0 hb486fe8_0 conda-forge
libblas 3.9.0 17_osx64_openblas conda-forge
libbrotlicommon 1.0.9 hb7f2c08_9 conda-forge
libbrotlidec 1.0.9 hb7f2c08_9 conda-forge
libbrotlienc 1.0.9 hb7f2c08_9 conda-forge
libcblas 3.9.0 17_osx64_openblas conda-forge
libcxx 16.0.6 hd57cbcb_0 conda-forge
libdeflate 1.18 hac1461d_0 conda-forge
libexpat 2.5.0 hf0c8a7f_1 conda-forge
libffi 3.4.2 h0d85af4_5 conda-forge
libgfortran 5.0.0 12_3_0_h97931a8_1 conda-forge
libgfortran5 12.3.0 hbd3c1fe_1 conda-forge
libjpeg-turbo 2.1.5.1 hb7f2c08_0 conda-forge
liblapack 3.9.0 17_osx64_openblas conda-forge
libopenblas 0.3.23 openmp_h429af6e_0 conda-forge
libpng 1.6.39 ha978bb4_0 conda-forge
libsqlite 3.42.0 h58db7d2_0 conda-forge
libtiff 4.5.1 hf955e92_0 conda-forge
libwebp-base 1.3.1 h0dc2134_0 conda-forge
libxcb 1.15 hb7f2c08_0 conda-forge
libzlib 1.2.13 h8a1eda9_5 conda-forge
llvm-openmp 16.0.6 hff08bdf_0 conda-forge
markupsafe 2.1.3 py311h2725bcf_0 conda-forge
matplotlib-base 3.7.1 py311h2bf763f_0 conda-forge
multimethod 1.4 py_0 conda-forge
munkres 1.1.4 pyh9f0ad1d_0 conda-forge
ncurses 6.4 hf0c8a7f_0 conda-forge
networkx 3.1 pyhd8ed1ab_0 conda-forge
numpy 1.23.5 py311h62c7003_0 conda-forge
openjpeg 2.5.0 h13ac156_2 conda-forge
openssl 3.1.2 h8a1eda9_0 conda-forge
packaging 23.1 pyhd8ed1ab_0 conda-forge
pandas 2.0.3 py311hab14417_1 conda-forge
patsy 0.5.3 pyhd8ed1ab_0 conda-forge
phik 0.12.3 py311h0482ae9_0 conda-forge
pillow 10.0.0 py311h7cb0e2d_0 conda-forge
pip 23.2.1 pyhd8ed1ab_0 conda-forge
platformdirs 3.10.0 pyhd8ed1ab_0 conda-forge
pooch 1.7.0 pyha770c72_3 conda-forge
pthread-stubs 0.4 hc929b4f_1001 conda-forge
pybind11-abi 4 hd8ed1ab_3 conda-forge
pydantic 1.10.12 py311h2725bcf_1 conda-forge
pyparsing 3.1.1 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.11.4 h30d4d87_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge
python_abi 3.11 3_cp311 conda-forge
pytz 2023.3 pyhd8ed1ab_0 conda-forge
pywavelets 1.4.1 py311hd5badaa_0 conda-forge
pyyaml 6.0 py311h5547dcb_5 conda-forge
readline 8.2 h9e318b2_1 conda-forge
requests 2.31.0 pyhd8ed1ab_0 conda-forge
scipy 1.10.1 py311h16c3c4d_3 conda-forge
seaborn 0.12.2 hd8ed1ab_0 conda-forge
seaborn-base 0.12.2 pyhd8ed1ab_0 conda-forge
setuptools 68.0.0 pyhd8ed1ab_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
statsmodels 0.14.0 py311h4a70a88_1 conda-forge
tangled-up-in-unicode 0.2.0 pyhd8ed1ab_0 conda-forge
tk 8.6.12 h5dbffcc_0 conda-forge
tqdm 4.66.0 pyhd8ed1ab_0 conda-forge
typeguard 2.13.3 pyhd8ed1ab_0 conda-forge
typing-extensions 4.7.1 hd8ed1ab_0 conda-forge
typing_extensions 4.7.1 pyha770c72_0 conda-forge
tzdata 2023c h71feb2d_0 conda-forge
urllib3 2.0.4 pyhd8ed1ab_0 conda-forge
visions 0.7.5 pyhd8ed1ab_0 conda-forge
wheel 0.41.1 pyhd8ed1ab_0 conda-forge
wordcloud 1.9.2 py311h2725bcf_1 conda-forge
xorg-libxau 1.0.11 h0dc2134_0 conda-forge
xorg-libxdmcp 1.1.3 h35c211d_0 conda-forge
xz 5.2.6 h775f41a_0 conda-forge
yaml 0.2.5 h0d85af4_2 conda-forge
ydata-profiling 4.5.0 pyhd8ed1ab_0 conda-forge
zstd 1.5.2 h829000d_7 conda-forge
```
### OS
MacOS Monterey 12.6.8
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | 0easy
|
Title: Support semicolon (;) as a way to suppress output
Body: ### Description
In Jupyter, if you have a cell that ends in a function call with a semicolon, the output is supressed:
```python
5
```
vs
```python
5;
```

In Marimo, this is not the case:

The only way to achieve the same result is with a dummy assignment:
```
_ = 5
```
which is clearly unpleasant.
### Suggested solution
Follow Jupyter and interpret the `;` on the last statement as an indication that the output should be suppressed.
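A hedged sketch of one way to detect the trailing semicolon on a cell's last expression (illustrative only; not marimo's implementation):
```python
import ast


def output_suppressed(cell_code: str) -> bool:
    """True if the cell's last top-level expression statement ends with ';'."""
    tree = ast.parse(cell_code)
    if not tree.body or not isinstance(tree.body[-1], ast.Expr):
        return False
    last = tree.body[-1]
    # Look at the source text right after the expression for a trailing ';'.
    tail = cell_code.splitlines()[last.end_lineno - 1][last.end_col_offset:]
    return tail.strip().startswith(";")


print(output_suppressed("5"))   # False
print(output_suppressed("5;"))  # True
```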
### Alternative
_No response_
### Additional context
This was on marimo 0.9.14 | 0easy
|
Title: Add new language features from 3.6 to cfg
Body: | 0easy
|
Title: bazel-lint all BUILD files
Body: ### Description
Bazel linter precommit hook is checked in and enabled for partial directory in [PR](https://github.com/ray-project/ray/pull/50869).
We would like to enable it folder by folder.
- (assigned) python folder: https://github.com/ray-project/ray/issues/51091
### Use case
_No response_ | 0easy
|
Title: Add exception/warning when all features are dropped during feature selection
Body: There can be a situation when all features are dropped during feature selection. We need to handle it, maybe by throwing an exception or raising a warning.
Code to reproduce:
```py
import numpy as np
from supervised import AutoML

X = np.random.uniform(size=(1000, 31))
y = np.random.randint(0, 2, size=(1000,))

automl = AutoML(
    algorithms=["CatBoost", "Xgboost", "LightGBM"],
    model_time_limit=30*60,
    start_random_models=10,
    hill_climbing_steps=3,
    top_models_to_improve=3,
    golden_features=True,
    feature_selection=True,
    stack_models=True,
    train_ensemble=True,
    explain_level=0,
    validation_strategy={
        "validation_type": "kfold",
        "k_folds": 4,
        "shuffle": False,
        "stratify": True,
    }
)

automl.fit(X, y)
```
Error:
```
Drop features ['feature_13', 'feature_27', 'feature_19', 'feature_25', 'feature_2', 'feature_15', 'feature_6', 'feature_26', 'random_feature', 'feature_17', 'feature_3', 'feature_4', 'feature_10', 'feature_12', 'feature_16', 'feature_1', 'feature_30', 'feature_29', 'feature_21', 'feature_22', 'feature_23', 'feature_24', 'feature_8', 'feature_31', 'feature_5', 'feature_28', 'feature_11', 'feature_9', 'feature_18', 'feature_14', 'feature_7', 'feature_20']
* Step features_selection will try to check up to 3 models
Traceback (most recent call last):
File "examples/scripts/bug.py", line 25, in <module>
automl.fit(X, y)
File "/home/piotr/sandbox/mljar-supervised/supervised/automl.py", line 289, in fit
return self._fit(X, y)
File "/home/piotr/sandbox/mljar-supervised/supervised/base_automl.py", line 670, in _fit
raise e
File "/home/piotr/sandbox/mljar-supervised/supervised/base_automl.py", line 657, in _fit
trained = self.train_model(params)
File "/home/piotr/sandbox/mljar-supervised/supervised/base_automl.py", line 229, in train_model
mf.train(model_path)
File "/home/piotr/sandbox/mljar-supervised/supervised/model_framework.py", line 147, in train
learner.fit(X_train, y_train, X_validation, y_validation, log_to_file)
File "/home/piotr/sandbox/mljar-supervised/supervised/algorithms/catboost.py", line 89, in fit
verbose_eval=False,
File "/home/piotr/sandbox/mljar-supervised/venv_mljs/lib/python3.6/site-packages/catboost/core.py", line 4292, in fit
silent, early_stopping_rounds, save_snapshot, snapshot_file, snapshot_interval, init_model)
File "/home/piotr/sandbox/mljar-supervised/venv_mljs/lib/python3.6/site-packages/catboost/core.py", line 1793, in _fit
save_snapshot, snapshot_file, snapshot_interval, init_model
File "/home/piotr/sandbox/mljar-supervised/venv_mljs/lib/python3.6/site-packages/catboost/core.py", line 1680, in _prepare_train_params
baseline, column_description)
File "/home/piotr/sandbox/mljar-supervised/venv_mljs/lib/python3.6/site-packages/catboost/core.py", line 985, in _build_train_pool
group_weight=group_weight, subgroup_id=subgroup_id, pairs_weight=pairs_weight, baseline=baseline)
File "/home/piotr/sandbox/mljar-supervised/venv_mljs/lib/python3.6/site-packages/catboost/core.py", line 392, in __init__
self._check_data_empty(data)
File "/home/piotr/sandbox/mljar-supervised/venv_mljs/lib/python3.6/site-packages/catboost/core.py", line 546, in _check_data_empty
raise CatBoostError("Input data must have at least one feature")
_catboost.CatBoostError: Input data must have at least one feature
``` | 0easy
|
Title: [Core] Runtime env working_dir validation
Body: ### What happened + What you expected to happen
#51377
We should do validation and fail early if working_dir contains invalid characters
### Versions / Dependencies
master
### Reproduction script
https://github.com/ray-project/ray/pull/51377
### Issue Severity
None | 0easy
|
Title: unify some naming
Body: Sometimes we call our schemas/parametrization `self._schema`, sometimes it is `self.document_type`, and other times it is `self.doc_schema`. This should be unified. | 0easy
|
Title: Pokemon's Location Area Encounters docs inconsistent with API response.
Body: The property `"location_area_encounters"` at the Pokemon endpoint now returns a string pointing to an URL
Example, for Resource Jirachi: `"location_area_encounters": "/api/v2/pokemon/385/encounters"`
According to the docs, it should return an array:
`"location_area_encounters": [],`
Is this a mistake or a change in the API?
| 0easy
|
Title: Add IsQuarterStart primitive
Body: - This primitive determines the `is_quarter_start` of a datetime column | 0easy
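A hedged sketch of the underlying computation (pandas exposes this directly via the `.dt` accessor; the actual primitive class would wrap it):
```python
import pandas as pd

dates = pd.Series(pd.to_datetime(["2024-01-01", "2024-02-15", "2024-04-01"]))
print(dates.dt.is_quarter_start)  # True, False, True
```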
|
Title: ollama llm requires credentials
Body: ollama llm blocks require credentials but that shouldn't be required | 0easy
|
Title: create_or_modify_pyi not working in some environments.
Body: ### Describe the bug
The latest Conda packages (for 5.7 and 5.5) do not seem to include `blocks_events.pyi` anymore.
This in most cases isn't a major issue because Gradio itself will generate it using `create_or_modify_pyi`.
The problem is that the function can fail in confusing ways in some conditions.
For example, if Gradio is installed system-wide (i.e., by an admin user) and then started by an unprivileged user, the function will fail as it can't write the `.pyi` file.
In such case the error is also confusing because you get a `FileNotFoundError: [Errno 2] No such file or directory: '.../python3.11/site-packages/gradio/blocks_events.pyi'` error. Which tells you nothing about why the file doesn't exist.
That is because the write operation is wrapped in a context manager suppressing write errors
https://github.com/gradio-app/gradio/blob/de42c85661faff84bd02960751b12c4bb722e153/gradio/component_meta.py#L131-L132
And thus the write will never crash but the subsequent read will
Suggested fixes:
* Add `blocks_events.pyi` to conda packages so that Gradio doesn't have to generate it at runtime.
* Remove `with no_raise_exception():` from `create_or_modify_pyi`. Given that the function will just crash anyway at the subsequent `read_text` call, and will instead emit an uninformative error, there doesn't seem to be any benefit in using the `with no_raise_exception():` context manager. If the concern is concurrency and race conditions over the file, that can be solved by writing the file to a temporary location and then moving it to its target destination.
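A hedged sketch of the write-to-temp-then-move idea from the second suggestion (illustrative only; not Gradio's code):
```python
import os
import tempfile
from pathlib import Path


def atomic_write_text(target: Path, contents: str) -> None:
    # Write to a temporary file in the same directory, then atomically replace
    # the target so concurrent readers never observe a partially written file.
    fd, tmp_name = tempfile.mkstemp(dir=target.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(contents)
        os.replace(tmp_name, target)
    except BaseException:
        os.unlink(tmp_name)
        raise


atomic_write_text(Path("blocks_events_demo.pyi"), "# generated stub\n")
```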
### Have you searched existing issues? 🐛
- [X] I have searched and found no existing issues
### Reproduction
1. Install Gradio as root
2. Delete gradio/block_events.pyi
3. Launch Gradio as another user that can't write to system-packages
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio 5.7.1 installed via Conda on Python 3.11
```
### Severity
I can work around it | 0easy
|
Title: Expose pagination module
Body: <!--- Provide a general summary of the changes you want in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Description
Hello, would it be possible to add pagination to the list of exported modules to `strawberry_django/__init__.py`? Attempting to access the submodule sometimes throws an error. If so, I'd be happy to submit a pull request!
Code:
```py
import strawberry_django
strawberry_django.pagination.apply  # This reports a warning
from strawberry_django.pagination import apply
apply # This is fine
```
Warning:
```
"pagination" is not a known member of module "strawberry_django"
``` | 0easy
|
Title: [Bug]: `triton_scaled_mm` never used on ROCm
Body: ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
INFO 03-07 02:02:58 [__init__.py:207] Automatically detected platform rocm.
Collecting environment information...
PyTorch version: 2.5.1+rocm6.2
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.2.41133-dd7f95766
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 10.5.0-1ubuntu1~22.04) 10.5.0
Clang version: Could not collect
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.2.41133
MIOpen runtime version: 3.2.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 104
On-line CPU(s) list: 0-103
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8470
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51
NUMA node1 CPU(s): 52-103
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-triton-rocm==3.1.0
[pip3] pyzmq==26.2.1
[pip3] torch==2.5.1+rocm6.2
[pip3] torchaudio==2.5.1+rocm6.2
[pip3] torchvision==0.20.1+rocm6.2
[pip3] transformers==4.49.0
[conda] Could not collect
ROCM Version: 6.2.41134-65d174c3e
Neuron SDK Version: N/A
vLLM Version: 0.7.4.dev189+gae122b1cb
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
============================ ROCm System Management Interface ============================
================================ Weight between two GPUs =================================
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7
GPU0 0 15 15 15 15 15 15 15
GPU1 15 0 15 15 15 15 15 15
GPU2 15 15 0 15 15 15 15 15
GPU3 15 15 15 0 15 15 15 15
GPU4 15 15 15 15 0 15 15 15
GPU5 15 15 15 15 15 0 15 15
GPU6 15 15 15 15 15 15 0 15
GPU7 15 15 15 15 15 15 15 0
================================= Hops between two GPUs ==================================
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7
GPU0 0 1 1 1 1 1 1 1
GPU1 1 0 1 1 1 1 1 1
GPU2 1 1 0 1 1 1 1 1
GPU3 1 1 1 0 1 1 1 1
GPU4 1 1 1 1 0 1 1 1
GPU5 1 1 1 1 1 0 1 1
GPU6 1 1 1 1 1 1 0 1
GPU7 1 1 1 1 1 1 1 0
=============================== Link Type between two GPUs ===============================
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7
GPU0 0 XGMI XGMI XGMI XGMI XGMI XGMI XGMI
GPU1 XGMI 0 XGMI XGMI XGMI XGMI XGMI XGMI
GPU2 XGMI XGMI 0 XGMI XGMI XGMI XGMI XGMI
GPU3 XGMI XGMI XGMI 0 XGMI XGMI XGMI XGMI
GPU4 XGMI XGMI XGMI XGMI 0 XGMI XGMI XGMI
GPU5 XGMI XGMI XGMI XGMI XGMI 0 XGMI XGMI
GPU6 XGMI XGMI XGMI XGMI XGMI XGMI 0 XGMI
GPU7 XGMI XGMI XGMI XGMI XGMI XGMI XGMI 0
======================================= Numa Nodes =======================================
GPU[0] : (Topology) Numa Node: 0
GPU[0] : (Topology) Numa Affinity: 0
GPU[1] : (Topology) Numa Node: 0
GPU[1] : (Topology) Numa Affinity: 0
GPU[2] : (Topology) Numa Node: 0
GPU[2] : (Topology) Numa Affinity: 0
GPU[3] : (Topology) Numa Node: 0
GPU[3] : (Topology) Numa Affinity: 0
GPU[4] : (Topology) Numa Node: 1
GPU[4] : (Topology) Numa Affinity: 1
GPU[5] : (Topology) Numa Node: 1
GPU[5] : (Topology) Numa Affinity: 1
GPU[6] : (Topology) Numa Node: 1
GPU[6] : (Topology) Numa Affinity: 1
GPU[7] : (Topology) Numa Node: 1
GPU[7] : (Topology) Numa Affinity: 1
================================== End of ROCm SMI Log ===================================
LD_LIBRARY_PATH=/home/luka/git/vllm/.venv/lib/python3.10/site-packages/cv2/../../lib64:
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
</details>
### π Describe the bug
I found an issue with vLLM and block fp8 linear, where the ROCm platform is incorrectly using a cutlass execution path. Because the cutlass path is always disabled on ROCm, this kernel is never reached, and instead we fall back on either `w8a8_block_fp8_matmul` or `torch.scaled_mm`.
The way we got there:
- @rasmith added the triton kernel `triton_scaled_mm` into `custom_ops.cutlass_scaled_mm` (not the right place for it in my opinion) in [127c074](https://github.com/vllm-project/vllm/commit/127c07480ecea15e4c2990820c457807ff78a057)
- @hongxiayang added DeepSeek support, using the cutlass path where cutlass_block_fp8_supported was True by default in [c36ac98](https://github.com/vllm-project/vllm/commit/c36ac98d0118537ec5f3f405a68311a10f9b59a5)
- @LucasWilkinson fixed the default of `cutlass_block_fp8_supported` param to `cutlass_block_fp8_supported()` which always returns False on ROCm in [76abd0c](https://github.com/vllm-project/vllm/commit/76abd0c88143419826bfc13d2cd29669d0fdfa1b).
The effect of this is that triton_scaled_mm is currently never used.
I think the path forward is to move `triton_scaled_mm` out of `custom_ops.cutlass_scaled_mm`. This should likely be done as part of a larger refactoring of the FP8 code, including the new `Fp8LinearOp` added in #14390. Additionally, it would be good to (at least somewhat) unify `triton_scaled_mm` with `w8a8_block_fp8_matmul`, which is the fallback for `apply_block_fp8_linear`.
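For clarity, a schematic of the dispatch order described above (illustrative Python only, not vLLM source; the kernels are passed in to keep the sketch self-contained):
```python
def block_fp8_linear_schematic(x, weight, *, cutlass_block_fp8_supported,
                               cutlass_scaled_mm, w8a8_block_fp8_matmul):
    # cutlass_block_fp8_supported() always returns False on ROCm today.
    if cutlass_block_fp8_supported:
        # triton_scaled_mm currently lives behind this branch, so it is never reached on ROCm.
        return cutlass_scaled_mm(x, weight)
    # Fallback actually taken on ROCm:
    return w8a8_block_fp8_matmul(x, weight)
```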
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | 0easy
|
Title: Order Variables, Pools and Connections lists by ID as default
Body: ### Body
I realized that the new AIP-38 screens for Variables, Pools and Connections use unsorted lists by default. Lists are just ordered however the DB returns them.
I propose that the three lists be sorted by ID by default.
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | 0easy
|
Title: Contribute `Radar chart` to Vizro visual vocabulary
Body: ## Thank you for contributing to our visual-vocabulary! π¨
Our visual-vocabulary is a dashboard that serves as a comprehensive guide for selecting and creating various types of charts. It helps you decide when to use each chart type, and offers sample Python code using [Plotly](https://plotly.com/python/) and instructions for embedding these charts into a [Vizro](https://github.com/mckinsey/vizro) dashboard.
Take a look at the dashboard here: https://huggingface.co/spaces/vizro/demo-visual-vocabulary
The source code for the dashboard is here: https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary
## Instructions
0. Get familiar with the dev set-up (this should be done already as part of the initial intro sessions)
1. Read through the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary) of the visual vocabulary
2. Follow the steps to contribute a chart. Take a look at other examples. This [commit](https://github.com/mckinsey/vizro/pull/634/commits/417efffded2285e6cfcafac5d780834e0bdcc625) might be helpful as a reference to see which changes are required to add a chart.
3. Ensure the app is running without any issues via `hatch run example visual-vocabulary`
4. List out the resources you've used in the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary)
5. Raise a PR
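For orientation, a minimal Plotly radar chart in the spirit of the resources listed below (a sketch only; follow the visual-vocabulary's own chart conventions when contributing):
```python
import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "dimension": ["speed", "reliability", "comfort", "safety", "efficiency"],
    "score": [4, 3, 5, 2, 4],
})

fig = px.line_polar(df, r="score", theta="dimension", line_close=True)
fig.update_traces(fill="toself")
fig.show()
```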
**Useful resources:**
- Radar: https://plotly.com/python/radar-chart/
- Data chart mastery: https://www.atlassian.com/data/charts/how-to-choose-data-visualization | 0easy
|
Title: [FEATURE REQUEST] Evaluate metric for WMAPE
Body: I've gone through the documentation for the evaluation metrics, but I couldn't find whether WMAPE is supported. Since some of the actuals may be zero, MAPE is not a good metric to evaluate with. Is there any way I can implement WMAPE? For example, something like the sketch below.
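For reference, WMAPE is usually computed as sum(|actual - forecast|) / sum(|actual|); here is a minimal sketch outside the library's metric API (hypothetical helper name):
```python
import numpy as np

def wmape(actual, forecast) -> float:
    # Weighted MAPE: tolerates individual zero actuals as long as the
    # total absolute actuals are non-zero.
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.abs(actual - forecast).sum() / np.abs(actual).sum()

# Example: wmape([0, 10, 20], [1, 9, 22]) -> 4 / 30 ≈ 0.133
```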
Thanks in advance | 0easy
|
Title: [FEA] GFQL constant-valued predicate type checking
Body: **Is your feature request related to a problem? Please describe.**
We had an error where the user ran chain
```python
[n({'n': '0'}), e_forward(), n()]
```
and got no hits, because it was a type error; it should have been:
```python
[n({node_id: 0}), e_forward(), n()]
```
This could have warned or raised ahead of time with a clear message
**Describe the solution you'd like**
This query does not type check --- `'0' : str` will never match `g._nodes.n.dtype.name == 'int64'`
So a few things:
* `g.chain()`: By default, we can run & raise on constant predicate type mismatches
* `g.chain_remote()`: Return a client error (422) on errors here
* Add optional arg `validation='skip'|'warn'|'error'`
**Describe alternatives you've considered**
I was trying to think of a way we could do Python-typed predicate functions, not just primitive value checking, but it was not clear.
We should probably also support both pandas 1 (obj = str or obj) and pandas 2 (str dtype)
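For illustration, a rough sketch of the primitive constant-vs-dtype check (hypothetical helper, not the GFQL API):
```python
import pandas as pd

def check_constant_predicate(nodes: pd.DataFrame, col: str, value) -> None:
    # Hypothetical validation: a str constant can never match a numeric column, and vice versa.
    series = nodes[col]
    if pd.api.types.is_numeric_dtype(series) and isinstance(value, str):
        raise TypeError(
            f"Predicate value {value!r} (str) can never match column {col!r} "
            f"of dtype {series.dtype}"
        )
    if pd.api.types.is_string_dtype(series) and isinstance(value, (int, float)):
        raise TypeError(
            f"Predicate value {value!r} can never match string column {col!r}"
        )

# e.g. check_constant_predicate(g._nodes, 'n', '0') would raise with a clear message
```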
| 0easy
|
Title: [Request] Allow repeated k-fold cross validation
Body: In order to reduce overfitting, I would like to ask for a new parameter: "n_repetitions". This parameter sets the number of complete sets of folds to compute for repeated k-fold cross-validation.
Cross-validation example:
```
{
"validation_type": "kfold",
"k_folds": 5,
"n_repetitions": 3, # new
"shuffle": True,
"stratify": True,
"random_seed": 123
}
``` | 0easy
|
Title: Add support for touch event callbacks
Body: Hiya,
Love the library - interactive notebooks are the future!
I managed to whip up a really simple drawing-pad program on my laptop that worked well in Jupyter Lab. However, when I tried loading the same notebook on a tablet (tried Firefox and Chrome), it seems like `mouse_down` events are triggered, but dragging didn't work, so drawing didn't work. I wonder whether adding the set of Touch Events might solve this problem:
https://developer.mozilla.org/en-US/docs/Web/Events#Touch_events
Thanks,
Hugh
| 0easy
|
Title: webapp: Show DNSKEY in domain info
Body: Some registries don't accept DS records (they insist on DNSKEY). Our API provides the value (next to the DS values). Let's expose it in the GUI. | 0easy
|
Title: Recreating control structure results from JSON fails if they have messages mixed with iterations/branches
Body: It's not possible to create control structures that contain messages mixed with branches/iterations normally, but listeners can log at any time and that can result in this kind of result structure. For example, if `start_try` logs something, the message will be the first item the TRY structure contains, before the actual TRY/EXCEPT branches.
This is a somewhat unlikely thing to happen, and it could be considered a bug if a listener really logs in places like this. Because the end result is that execution results are unusable, it is somewhat severe anyway.
|
Title: Bot.******_chat_invite_link() return type does not match documentation
Body: ## Context
* Operating System: Win10
* Python Version: 3.9.2
* aiogram version: 2.12.1
* aiohttp version: 3.7.4.post0
## Expected Behavior
bot.create_chat_invite_link() returns a types.ChatInviteLink object
bot.edit_chat_invite_link() returns a types.ChatInviteLink object
bot.revoke_chat_invite_link() returns a types.ChatInviteLink object
link to code:
https://github.com/aiogram/aiogram/blob/79c59b34f9141427d221842662dcef2eb90a558f/aiogram/bot/bot.py#L1805
## Current Behavior
all returned values are dicts
### Steps to Reproduce
```python
create = await bot.create_chat_invite_link(chat_id)
print(type(create), create)
edit = await bot.edit_chat_invite_link(chat_id, create["invite_link"], member_limit=15)
print(type(edit), edit)
revoke = await bot.revoke_chat_invite_link(chat_id, create["invite_link"])
print(type(revoke), revoke)
```
### Failure Logs
```python
<class 'dict'> {'invite_link': 'https://t.me/joinchat/xxxxxxxxxxx', 'creator': {'id': xxxxxxxxxxx, 'is_bot': True, 'first_name': 'xxxxxxxxxxxx', 'username': 'xxxxxxxxxxxx'}, 'is_primary': False, 'is_revoked': False}
<class 'dict'> {'invite_link': 'https://t.me/joinchat/xxxxxxxxxxx', 'creator': {'id': xxxxxxxxxxx, 'is_bot': True, 'first_name': 'xxxxxxxxxxxx', 'username': 'xxxxxxxxxxxx'}, 'member_limit': 15, 'is_primary': False, 'is_revoked': False}
<class 'dict'> {'invite_link': 'https://t.me/joinchat/xxxxxxxxxxx', 'creator': {'id': xxxxxxxxxxx, 'is_bot': True, 'first_name': 'xxxxxxxxxxxx', 'username': 'xxxxxxxxxxxx'}, 'member_limit': 15, 'is_primary': False, 'is_revoked': True}
```
| 0easy
|
Title: [BUG] encode_axis binding wrong field
Body: **Describe the bug**
`g.encode_axis(rows=[...]).plot()` has no effect
It appears to not be under the key `"node_encodings"`, so it is a no-op. Not sure why the server is accepting it.
**To Reproduce**
Works:
```python
g._complex_encodings['node_encodings'] = {
'current': {},
'default': {
"pointAxisEncoding": {
"graphType": "point",
"encodingType": "axis",
"variation": "categorical",
"attribute": "degree",
"rows": [
{"label": "a", "r": 60, "internal": True },
{"label": "b", "r": 120, "external": True },
]
}
}
}
g.plot()
```
Doesn't:
```python
g2 = g.encode_axis([
{'r': 14, 'external': True, 'label': 'outermost'},
{'r': 12, 'external': True},
{'r': 10, 'space': True},
{'r': 8, 'space': True},
{'r': 6, 'internal': True},
{'r': 4, 'space': True},
{'r': 2, 'space': True, 'label': 'innermost'}
])
g2.plot()
```
Note lack of field `"node_encodings"` at https://github.com/graphistry/pygraphistry/blob/37f70e79a87c3122968c69e6e72e0d253a677c14/graphistry/PlotterBase.py#L373
**Expected behavior**
Radial axis
**Actual behavior**
Viz without axis
**PyGraphistry API client environment**
0.28.1
| 0easy
|
Title: [Tech debt] Improve interface for RandomShadow
Body: Right now in the transform we have separate parameters for `num_shadows_lower` and `num_shadows_upper`
Better would be to have one parameter `num_shadows_range = [num_shadows_lower, num_shadows_upper]`
=>
We can update the transform to use the new signature and keep the old arguments working, but mark them as deprecated (a sketch is below).
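A rough sketch of how the transition could look (hypothetical signature, mirroring the approach in the linked PR):
```python
import warnings

class RandomShadow:
    def __init__(self, num_shadows_range=(1, 2),
                 num_shadows_lower=None, num_shadows_upper=None):
        # Keep the old arguments working, but steer users towards the new one.
        if num_shadows_lower is not None or num_shadows_upper is not None:
            warnings.warn(
                "num_shadows_lower/num_shadows_upper are deprecated; "
                "use num_shadows_range instead.",
                DeprecationWarning,
            )
            num_shadows_range = (
                num_shadows_lower if num_shadows_lower is not None else num_shadows_range[0],
                num_shadows_upper if num_shadows_upper is not None else num_shadows_range[1],
            )
        self.num_shadows_range = num_shadows_range
```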
----
PR could be similar to https://github.com/albumentations-team/albumentations/pull/1615 | 0easy
|
Title: Topic 2 Workbook issue
Body: In >topic2_part2_telecom_churn_tsne.ipynb
```
fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(16, 10))
for idx, feat in enumerate(features):
sns.boxplot(x='Churn', y=feat, data=df, ax=axes[idx / 4, idx % 4])
axes[idx / 4, idx % 4].legend()
axes[idx / 4, idx % 4].set_xlabel('Churn')
axes[idx / 4, idx % 4].set_ylabel(feat);
```
would yield the following error:
```
IndexError Traceback (most recent call last)
<ipython-input-15-b3733e9b0263> in <module>()
2
3 for idx, feat in enumerate(features):
----> 4 sns.boxplot(x='Churn', y=feat, data=df, ax=axes[idx / 4, idx % 4])
5 axes[idx / 4, idx % 4].legend()
6 axes[idx / 4, idx % 4].set_xlabel('Churn')
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
```
Can be fixed by either replacing `idx / 4` -> `int(idx / 4)` or `idx / 4` -> `idx // 4`
| 0easy
|
Title: Adjusting docs links
Body: Since we changed the domain of the docs to docs.ploomber.io, we need to search all of our repos (ploomber, soorgeon, soopervisor, and projects), look for the ploomber.readthedocs.io address, and replace it with docs.ploomber.io.
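A small script along these lines could do the sweep (a sketch; adjust the file extensions per repo):
```python
from pathlib import Path

OLD, NEW = "ploomber.readthedocs.io", "docs.ploomber.io"

for path in Path(".").rglob("*"):
    if path.is_file() and path.suffix in {".md", ".rst", ".py", ".ipynb", ".txt"}:
        text = path.read_text(encoding="utf-8")
        if OLD in text:
            path.write_text(text.replace(OLD, NEW), encoding="utf-8")
            print(f"updated {path}")
```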
Please follow the contribution guidelines for the docs.
| 0easy
|
Title: Request: add 'how to seed' to the readme.md
Body: The readme.md currently reads:
>Note
>
>Use of fully randomized data in tests is quickly a problem for reproducing broken builds. To that purpose, factory_boy provides helpers to handle the random seeds it uses.
however, that is the only mention of the word "seed" in the entire readme. Could a bit be added that shows how to actually seed a Faker so multiple runs with the same seed yield the same data (for example, something like the snippet below)? | 0easy
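A candidate snippet for the readme (a sketch; `factory.random.reseed_random` is the documented entry point, while the factory itself is made up for the example):
```python
import factory
import factory.random

class UserFactory(factory.DictFactory):
    name = factory.Faker("name")

factory.random.reseed_random(1234)
print(UserFactory())  # identical output on every run that uses the same seed
```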
|
Title: Korean translation
Body: We already have PR #5178 adding Korean. It has a few issues to be fixed, but getting it into RF 7.1 ought to still be possible.
|
Title: add more usage examples
Body:
### Description
Add more examples of how to use the package. A description of the YAML file and the commands needs to be provided.
| 0easy
|
Title: Encoding of pytest test names is not preserved
Body: [//]: # (
. Note: for support questions, please use Stackoverflow or Gitter**.
. This repository's issues are reserved for feature requests and bug reports.
.
. In case of any problems with Allure Jenkins plugin** please use the following repository
. to create an issue: https://github.com/jenkinsci/allure-plugin/issues
.
. Make sure you have a clear name for your issue. The name should start with a capital
. letter and no dot is required in the end of the sentence. An example of good issue names:
.
. - The report is broken in IE11
. - Add an ability to disable default plugins
. - Support emoji in test descriptions
)
Hi !
#### I'm submitting a ...
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
Test names are not properly utf8-encoded in results files.
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
Here is a very simple test.
```python
# test_allure.py
def test_musΓ©e(self):
assert False
```
JSON output:
```json
{"name": "test_musΓΒ©e", "status": "failed", "statusDetails": {"message": "AssertionError: assert False", "trace": "E assert False"}, "start": 1527000563172, "stop": 1527000563173, "uuid": "80c6f937-6366-4a1c-a194-f159517e1e4a", "historyId": "b9238153ea1ae00f41b1b2da19fba81e", "fullName": "test_allure#test_musΓΒ©e", "labels": [{"name": "host", "value": "adrien-XPS"}, {"name": "thread", "value": "9176-MainThread"}, {"name": "framework", "value": "pytest"}, {"name": "language", "value": "cpython3"}, {"name": "package", "value": "test_allure"}]}
```
#### What is the expected behavior?
UTF8 names are preserved.
#### Please tell us about your environment:
- Allure version: 2.6.0
- Test framework: pytest@3.5.1
- Allure adaptor: allure-pytest@2.3.3b1
#### Other information
I suppose that [`escape_name`](https://github.com/allure-framework/allure-python/blob/master/allure-pytest/src/utils.py#L102) function is to blame here. But I don't really understand how it is used and why.
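For what it's worth, the corruption looks like the classic pattern of UTF-8 bytes being re-decoded as Latin-1 (an illustration only, not necessarily the exact code path in `escape_name`):
```python
# Illustration: the same kind of mojibake appears when UTF-8 bytes are decoded as Latin-1.
text = "musée"
print(text.encode("utf-8").decode("latin-1"))  # -> musÃ©e
```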
| 0easy
|
Title: build: `make docs-serve` watching polyfactory dir in litestar project
Body: https://github.com/litestar-org/litestar/blob/042468cd8af6a852d1758735507ed9305daf9734/Makefile#L149 | 0easy
|
Title: Rename `coloralpha` to `opacity` for consistently with Plotly Express
Body: | 0easy
|
Title: [New feature] Add apply_to_images to Blur
Body: | 0easy
|
Title: Testing: Cover 100% with unit-testing
Body: | 0easy
|
Title: [Feature Request] Add documentations to describe debug_info
Body: Add documentation describing what to do with the various error messages from `debug_info`. Related to #457.
Also add a checker in `show_version` to ensure the correct version of Lux is installed (not the incorrect `pip install lux`) | 0easy
|
Title: Ensure that categorical colormaps are registered as ListedColormap
Body: Generally categorical colormaps should be registered as `ListedColormap` instead of a `LinearSegmentedColormap` so the Glasbey colormaps should be switched. | 0easy
|
Title: `prepare_image` in Kandinsky pipelines doesn't support `torch.Tensor`
Body: Hi, I want to report a bug in Kandinsky pipelines.
https://github.com/huggingface/diffusers/blob/2f0f281b0d808c05bc7a974e68d298a006dd120a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L413-L420
According to the above contents, elements in `image` can be either `PIL.Image.Image` or `torch.Tensor`.
https://github.com/huggingface/diffusers/blob/2f0f281b0d808c05bc7a974e68d298a006dd120a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L98-L104
However, the `prepare_image` function is only for `PIL.Image.Image`, and does not support `torch.Tensor`.
Can you resolve this problem by implementing an image resize function for `torch.Tensor`? | 0easy
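A rough sketch of what a tensor branch could look like (hypothetical helper using `torch.nn.functional.interpolate`; the real fix should match the pipeline's expected value range and layout):
```python
import torch
import torch.nn.functional as F

def prepare_tensor_image(image: torch.Tensor, w: int, h: int) -> torch.Tensor:
    # Assumes a (C, H, W) or (B, C, H, W) tensor already scaled to [-1, 1].
    if image.ndim == 3:
        image = image.unsqueeze(0)
    return F.interpolate(image, size=(h, w), mode="bicubic", antialias=True)
```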
|
Title: Update type(...) to import the Type from it's canonical location
Body: I missed that in the previous PR but those should likely be :
```
from types import MethodDescriptorType
from types import ModuleType
```
_Originally posted by @Carreau in https://github.com/ipython/ipython/pull/14029#discussion_r1180062854_
| 0easy
|
Title: Update docs include syntax for source examples
Body: ### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
This is a good first contribution. :nerd_face:
The code examples shown in the docs are actual Python files. They are even tested in CI; that's why you can always copy-paste an example and it will always work, because the example is tested.
The way those examples are included in the docs has used a specific format. But now there's a new format available that is much simpler and easier to use than the previous one, particularly in complex cases, for example when there are examples for multiple versions of Python.
But not all the docs have the new format yet. The docs should use the new format to include examples. That is the task. :nerd_face:
**It should be done as one PR per page updated.**
## Simple Example
Before, the format was like:
````markdown
```Python hl_lines="1 4"
{!./docs_src/tutorial/create_db_and_table/tutorial001_py310.py!}
```
````
Now the new format looks like:
````markdown
{* ./docs_src/tutorial/create_db_and_table/tutorial001_py310.py hl[1,4] *}
````
* Instead of `{!` and `!}` it uses `{*` and `*}`
* It no longer has a line above with:
````markdown
```Python
````
* And it no longer has a line below with:
````markdown
```
````
* The highlight is no longer a line with e.g. `hl_lines="3"` (to highlight line 3), but instead in the same line there's a `hl[3]`.
## Multiple Python Versions
In many cases there are variants of the same example for multiple versions of Python, or for using `Annotated` or not.
In those cases, the current include examples have syntax for tabs, and notes saying `Annotated` should be preferred. For example:
````markdown
//// tab | Python 3.10+
```Python hl_lines="1 4"
{!./docs_src/tutorial/create_db_and_table/tutorial001_py310.py!}
```
////
//// tab | Python 3.7+
```Python hl_lines="3 6"
{!./docs_src/tutorial/create_db_and_table/tutorial001.py!}
```
////
````
In these cases, it should be updated to only include the first one (the others will be included automatically :sunglasses: ):
````markdown
{* ./docs_src/tutorial/create_db_and_table/tutorial001_py310.py hl[1,4] *}
````
* The syntax for tabs is also removed, all the other variants are included automatically.
* The highlight lines are included for that same first file, the fragment with `hl_lines="1 4"` is replaced with `hl[1,4]`
## Highlight Lines
### Simple Lines
When there's a fragment like:
````markdown
hl_lines="4 8 12"
````
That means it is highlighting the lines 4, 8, and 12.
The new syntax is on the same include line:
````markdown
hl[4,8,12]
````
* It separates individual lines by commas.
* It uses `hl`, with square brackets around.
### Line Ranges
When there are line ranges, like:
````markdown
hl_lines="4-6"
````
That means it is highlighting lines from 4 to 6 (so, 4, 5, and 6).
The new syntax uses `:` instead of `-` for the ranges:
````markdown
hl[4:6]
````
### Multiple Highlights
There are some highlights that include individual lines and also line ranges, for example the old syntax was:
````markdown
hl_lines="2 4-6 8-11 13"
````
That means it is highlighting:
* Line 2
* Lines from 4 to 6 (so, 4, 5, and 6)
* Lines from 8 to 11 (so, 8, 9, 10, and 11)
* Line 13
The new syntax separates by commas instead of spaces:
````markdown
hl[2,4:6,8:11,13]
````
## Include Specific Lines
In some cases, there are specific lines included instead of the entire file.
For example, the old syntax was:
````markdown
```Python hl_lines="1 4"
{!./docs_src/tutorial/create_db_and_table/tutorial001_py310.py[ln:1-8]!}
# More code here later π
```
````
In this example, the lines included are from line 1 to line 8 (lines 1, 2, 3, 4, 5, 6, 7, 8). In the old syntax, it's defined with the fragment:
````markdown
[ln:1-8]
````
In the new syntax, the included code from above would be:
````markdown
{* ./docs_src/tutorial/create_db_and_table/tutorial001_py310.py ln[1:8] hl[1,4] *}
````
* The lines to include that were defined with the fragment `[ln:1-8]`, are now defined with `ln[1:8]`
The new syntax `ln` as in `ln[1:8]` also supports multiple lines and ranges to include.
### Comments Between Line Ranges
In the old syntax, when there are ranges of code included, there are comments like:
````markdown
# Code below omitted π
````
The new syntax generates those comments automatically based on the line ranges.
### Real Example
A more real example of the include with the old syntax looked like this:
````markdown
//// tab | Python 3.10+
```Python hl_lines="1 4"
{!./docs_src/tutorial/create_db_and_table/tutorial001_py310.py[ln:1-8]!}
# More code here later π
```
////
//// tab | Python 3.7+
```Python hl_lines="3 6"
{!./docs_src/tutorial/create_db_and_table/tutorial001.py[ln:1-10]!}
# More code here later π
```
////
/// details | π Full file preview
//// tab | Python 3.10+
```Python
{!./docs_src/tutorial/create_db_and_table/tutorial001_py310.py!}
```
////
//// tab | Python 3.7+
```Python
{!./docs_src/tutorial/create_db_and_table/tutorial001.py!}
```
////
///
````
In the new syntax, that is replaced with this:
````markdown
{* ./docs_src/tutorial/create_db_and_table/tutorial001_py310.py ln[1:8] hl[1,4] *}
````
* The only file that needs to be included and defined is the first one, and the lines to include and highlight are also for the first file only.
* All the other file includes, full file preview, comments, etc. are generated automatically.
---
An example PR: https://github.com/fastapi/sqlmodel/pull/1149
## Line Ranges and Highlights
In the old syntax, the `hl_lines="15"` refers to highlighting the resulting lines.
For example, with the old syntax:
````markdown
```Python hl_lines="15"
# Code above omitted π
{!./docs_src/advanced/uuid/tutorial001_py310.py[ln:37-54]!}
# Code below omitted π
```
````
The result is rendered something like:
```Python hl_lines="15"
# Code above omitted π
def select_hero():
with Session(engine) as session:
hero_2 = Hero(name="Spider-Boy", secret_name="Pedro Parqueador")
session.add(hero_2)
session.commit()
session.refresh(hero_2)
hero_id = hero_2.id
print("Created hero:")
print(hero_2)
print("Created hero ID:")
print(hero_id)
statement = select(Hero).where(Hero.id == hero_id) # THIS LINE IS HIGHLIGHTED
selected_hero = session.exec(statement).one()
print("Selected hero:")
print(selected_hero)
print("Selected hero ID:")
print(selected_hero.id)
# Code below omitted π
```
And the highlight would be on the line with the comment `# THIS LINE IS HIGHLIGHTED`.
If you count the lines in that snippet, the first line has:
```Python
# Code above omitted π
```
And the line 15 in that snippet has:
```Python
statement = select(Hero).where(Hero.id == hero_id) # THIS LINE IS HIGHLIGHTED
```
Not the entire source file was included, only lines 37 to 54. And that highlighted line inside of the source file is actually line 49. But the `hl_lines="15"` refers to the line 15 in the **rendered** snippet of code.
So, with the old syntax, when you want to highlight lines, you have to include the file and see the rendered result, count lines in the rendered document, and then mark the lines to highlight based on that result, wait for the reload to check if the line is correct, etc. ...it's a very slow process.
But with the new syntax, the number that you use is the line in the actual source file, so, if the line to highlight in the source file is line 49, that's what you define in the include:
````markdown
{* ./docs_src/advanced/uuid/tutorial001_py310.py ln[37:54] hl[49] *}
````
This way it's easier to declare the lines or line ranges to include and the lines to highlight by just checking the source file. All the comments in between ranges and complex math will be done automatically.
## Help
Do you want to help? Please do!
Remember **it should be done as one PR per page updated.**
If you see a page that doesn't fit these cases, leave it as is, I'll take care of it later.
Before submitting a PR, check if there's another one already handling that file.
Please name the PR including the file path, for example:
````markdown
π Update includes for `docs/tutorial/create-db-and-table.md`
```` | 0easy
|
Title: Add trigger name to kwargs
Body: To understand which trigger is running, add a variable with the trigger name to kwargs.
| 0easy
|
Title: New indicator suggestion(Clenow momentum)
Body: **Is your feature request related to a problem? Please describe.**
My request is a new indicator called Clenow momentum.
**Describe the solution you'd like**
It measures momentum by fitting an exponential regression to log prices over a rolling number of days and combining the annualized regression slope with its R². It can detect trends in a stock as well as the direction of the stock (a sketch of the calculation follows).
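A rough sketch of the usual calculation from the linked references (hedged; this is not a pandas_ta API, and the 90-day window and 252-day annualization are just common defaults):
```python
import numpy as np
import pandas as pd
from scipy import stats

def clenow_momentum(close: pd.Series, window: int = 90) -> pd.Series:
    # Annualized exponential-regression slope of log prices, weighted by R^2.
    def score(prices: np.ndarray) -> float:
        y = np.log(prices)
        x = np.arange(len(y))
        slope, _, rvalue, _, _ = stats.linregress(x, y)
        annualized = (np.exp(slope) ** 252 - 1) * 100
        return annualized * rvalue ** 2
    return close.rolling(window).apply(score, raw=True)
```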
**Additional context**
I've found several references on GitHub referring to Clenow momentum, but none of them calculates it on a pandas DataFrame. It would be awesome if it could be added to this library to set it apart from TA-Lib. I also don't know how to codify it because I'm new to programming. Here's further reference:
https://chrischow.github.io/dataandstuff/2018-11-10-how-not-to-invest-clenow-momentum/
it's also similar to this:
https://www.tradingview.com/script/QWHjwm4B-Exponential-Regression-Slope-Annualized-with-R-squared-Histogram/
I also found this reference but it's too abstract for me:
https://teddykoker.com/2019/05/momentum-strategy-from-stocks-on-the-move-in-python/ | 0easy
|
Title: Add the missing docstrings to the `request_node.py` file
Body: Add the missing docstrings to the [request_node.py](https://github.com/scanapi/scanapi/blob/main/scanapi/tree/request_node.py) file
[Here](https://github.com/scanapi/scanapi/wiki/First-Pull-Request#7-make-your-changes) you can find instructions of how we create the [docstrings](https://www.python.org/dev/peps/pep-0257/#what-is-a-docstring).
Child of https://github.com/scanapi/scanapi/issues/411 | 0easy
|
Title: WARN level missing from the log level selector in log.html
Body: If you run tests with `--loglevel warn` and tests use `Set Log Level` to change the level to something lower, the log level selector will be shown but the selected option has no value. Clicking the selector allows selecting levels, so this is just a visual UI problem. Running tests with the WARN level is rare, but we could have the level there also otherwise to allow hiding also INFO messages. Implementing this requires only adding `WARN` as one of the possible options in the selector.
The same issue occurs also with the ERROR level. I see even less need to run tests with the ERROR level than there is to use WARN, and having WARN is already enough to allow hiding INFO messages. Due to how the log level selector is implemented, this would also require considerably more work. I don't think it's worth the effort.
|
Title: Please add OTT (Optimized trend tracker) to Pandas_TA
Body: Hi Kevin,
Please help to add OTT to pandas_ta, by KivancOzbilgic on TradingView.
I converted the Pine Script to Python; it has correct output, though my logic seems inefficient, as it takes 10 seconds or so to calculate.
I think it has almost similar logic to QQE. Thanks in advance
=================
my version
==============
```python
#OTT variables
pds = 2
percent = 1.4
alpha = 2 / (pds + 1)
df['atr'] = talib.ATR(df['high'], df['low'], df['close'], timeperiod=14)
np_atr = np.array(df['atr'])
ratr = format(np_atr[-2], '.4f')
df['ud1'] = np.where(df['close'] > df['close'].shift(1), (df['close'] - df['close'].shift()) , 0)
df['dd1'] = np.where(df['close'] < df['close'].shift(1), (df['close'].shift() - df['close']) , 0)
df['UD'] = talib.SUM(df['ud1'], timeperiod=9)
df['DD'] = talib.SUM(df['dd1'], timeperiod=9)
df['CMO'] = ((df['UD'] - df['DD']) / (df['UD'] + df['DD'])).fillna(0).abs()
# df['Var'] = talib.EMA(df['close'], timeperiod=5)
df['Var'] = 0.0
for i in range(pds, len(df)):
df['Var'].iat[i] = (alpha * df['CMO'].iat[i] * df['close'].iat[i]) + (1 - alpha * df['CMO'].iat[i]) * df['Var'].iat[i-1]
df['fark'] = df['Var'] * percent * 0.01
df['newlongstop'] = df['Var'] - df['fark']
df['newshortstop'] = df['Var'] + df['fark']
df['longstop'] = 0.0
df['shortstop'] = 999999999999999999
# df['dir'] = 1
for i in df['RSI_MA']:
def maxlongstop():
df.loc[(df['newlongstop'] > df['longstop'].shift(1)) , 'longstop'] = df['newlongstop']
df.loc[(df['longstop'].shift(1) > df['newlongstop']), 'longstop'] = df['longstop'].shift(1)
return df['longstop']
def minshortstop():
df.loc[(df['newshortstop'] < df['shortstop'].shift(1)), 'shortstop'] = df['newshortstop']
df.loc[(df['shortstop'].shift(1) < df['newshortstop']), 'shortstop'] = df['shortstop'].shift(1)
return df['shortstop']
df['longstop']= np.where (
(
(df['Var'] > df['longstop'].shift(1))
),maxlongstop(),df['newlongstop']
)
df['shortstop'] = np.where(
(
(df['Var'] < df['shortstop'].shift(1))
), minshortstop(), df['newshortstop'])
#get xover
df['xlongstop'] = np.where (
(
(df['Var'].shift(1) > df['longstop'].shift(1)) &
(df['Var'] < df['longstop'].shift(1))
), 1,0)
df['xshortstop'] =np.where(
(
(df['Var'].shift(1) < df['shortstop'].shift(1)) &
(df['Var'] > df['shortstop'].shift(1))
), 1,0)
df['trend']=0
df['dir'] = 0
for i in df['RSI_MA']:
df['trend'] = np.where(
(
(df['xshortstop'] == 1)
),1, (np.where((df['xlongstop'] == 1),-1,df['trend'].shift(1)))
)
df['dir'] = np.where(
(
(df['xshortstop'] == 1)
),1, (np.where((df['xlongstop'] == 1),-1,df['dir'].shift(1).fillna(1)))
)
#get OTT
df['MT'] = np.where(df['dir'] == 1, df['longstop'], df['shortstop'])
df['OTT'] = np.where(df['Var'] > df['MT'], (df['MT'] * (200 + percent) / 200), (df['MT'] * (200 - percent) / 200))
df['OTT'] = df['OTT'].shift(2)
```
```
============
Pine version
//@version=4
// This source code is subject to the terms of the Mozilla Public License 2.0 at https://mozilla.org/MPL/2.0/
// Β© KivancOzbilgic
//created by: @Anil_Ozeksi
//developer: ANIL ΓZEKΕΔ°
//author: @kivancozbilgic
study("Optimized Trend Tracker","OTT", overlay=true)
src = input(close, title="Source")
length=input(2, "OTT Period", minval=1)
percent=input(1.4, "OTT Percent", type=input.float, step=0.1, minval=0)
showsupport = input(title="Show Support Line?", type=input.bool, defval=true)
showsignalsk = input(title="Show Support Line Crossing Signals?", type=input.bool, defval=true)
showsignalsc = input(title="Show Price/OTT Crossing Signals?", type=input.bool, defval=false)
highlight = input(title="Show OTT Color Changes?", type=input.bool, defval=false)
showsignalsr = input(title="Show OTT Color Change Signals?", type=input.bool, defval=false)
highlighting = input(title="Highlighter On/Off ?", type=input.bool, defval=true)
mav = input(title="Moving Average Type", defval="VAR", options=["SMA", "EMA", "WMA", "TMA", "VAR", "WWMA", "ZLEMA", "TSF"])
Var_Func(src,length)=>
valpha=2/(length+1)
vud1=src>src[1] ? src-src[1] : 0
vdd1=src<src[1] ? src[1]-src : 0
vUD=sum(vud1,9)
vDD=sum(vdd1,9)
vCMO=nz((vUD-vDD)/(vUD+vDD))
VAR=0.0
VAR:=nz(valpha*abs(vCMO)*src)+(1-valpha*abs(vCMO))*nz(VAR[1])
VAR=Var_Func(src,length)
Wwma_Func(src,length)=>
wwalpha = 1/ length
WWMA = 0.0
WWMA := wwalpha*src + (1-wwalpha)*nz(WWMA[1])
WWMA=Wwma_Func(src,length)
Zlema_Func(src,length)=>
zxLag = length/2==round(length/2) ? length/2 : (length - 1) / 2
zxEMAData = (src + (src - src[zxLag]))
ZLEMA = ema(zxEMAData, length)
ZLEMA=Zlema_Func(src,length)
Tsf_Func(src,length)=>
lrc = linreg(src, length, 0)
lrc1 = linreg(src,length,1)
lrs = (lrc-lrc1)
TSF = linreg(src, length, 0)+lrs
TSF=Tsf_Func(src,length)
getMA(src, length) =>
ma = 0.0
if mav == "SMA"
ma := sma(src, length)
ma
if mav == "EMA"
ma := ema(src, length)
ma
if mav == "WMA"
ma := wma(src, length)
ma
if mav == "TMA"
ma := sma(sma(src, ceil(length / 2)), floor(length / 2) + 1)
ma
if mav == "VAR"
ma := VAR
ma
if mav == "WWMA"
ma := WWMA
ma
if mav == "ZLEMA"
ma := ZLEMA
ma
if mav == "TSF"
ma := TSF
ma
ma
MAvg=getMA(src, length)
fark=MAvg*percent*0.01
longStop = MAvg - fark
longStopPrev = nz(longStop[1], longStop)
longStop := MAvg > longStopPrev ? max(longStop, longStopPrev) : longStop
shortStop = MAvg + fark
shortStopPrev = nz(shortStop[1], shortStop)
shortStop := MAvg < shortStopPrev ? min(shortStop, shortStopPrev) : shortStop
dir = 1
dir := nz(dir[1], dir)
dir := dir == -1 and MAvg > shortStopPrev ? 1 : dir == 1 and MAvg < longStopPrev ? -1 : dir
MT = dir==1 ? longStop: shortStop
OTT=MAvg>MT ? MT*(200+percent)/200 : MT*(200-percent)/200
``` | 0easy
|
Title: WebDAV methods not supported
Body: Falcon defines supported HTTP methods in `falcon/constants.py`: supported are the "usual" `HTTP_METHODS` and, in addition to that, `WEBDAV_METHODS`. However, only the WebDAV versioning extension methods from RFC 3253 are supported, not the "ordinary" WebDAV ones (i.e., from RFCs 2518 & 4918) like `COPY`, `LOCK`, `MKCOL`, `MOVE`, etc.
Supporting only an extension, but not the core upon which that extension builds looks somewhat inconsistent. | 0easy
|
Title: Rewrite tests with PyTest
Body: With pytest it is easy to create a single app context for testing, so we should get rid of nose and unittest and use pytest. A sketch of the fixture setup is below.
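A minimal sketch of what the conftest could look like (assumes a Flask-style `create_app` factory; the names are hypothetical):
```python
# conftest.py
import pytest
from myapp import create_app  # hypothetical application factory

@pytest.fixture(scope="session")
def app():
    app = create_app(testing=True)
    with app.app_context():
        yield app

@pytest.fixture()
def client(app):
    return app.test_client()
```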
| 0easy
|
Title: Zero HMC acceptance rate for standard normal
Body: Step size tuning in HMC fails on a standard Gaussian. Acceptance rate is 0, regardless of the step size. The problem goes away with NUTS.
The problem is shown in this notebook:
https://colab.research.google.com/drive/1QBrROnhUEAOcP3Q9-pT8AfwFjZdu9y1y?usp=sharing | 0easy
|
Title: Codify style for doctring summary lines
Body: Currently, we ask contributors to strive for consistency with existing code, but it would be helpful to clarify the following regarding docstrings:
- Docstrings should begin with a short (~70 characters or less) summary line that ends in a period.
- The summary line should begin immediately after the opening quotes (do not add a line break before the summary line)
- The summary line should describe what it *is* if it is a class (e.g., "An asynchronous, file-like object for reading ASGI streams.")
- The summary line should describe what it *does* when called, if it is a function, structured as an imperative (e.g., "Delete a header that was previously set for this response.")
We will need two patches to address this issue:
- [x] Update CONTRIBUTING.md to include the guidelines above
- [x] Fix up any existing docstrings to provide a baseline of consistency going forward | 0easy
|
Title: Used Sqlalchemy_utils types
Body: ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
I want to be able to use model types provided by sqlalchemy_utils such as EmailType, IPAddressType, and UUIDType.
### Describe the solution you would like.
I would like the admin page to work without crashing when they are used (just treating them all as strings would work fine for me).
### Describe alternatives you considered
I could convert them all to strings in my model, but that would take away a lot of functionality.
### Additional context
_No response_ | 0easy
|