text (string, lengths 20–57.3k) | labels (class label, 4 classes)
---|---|
Title: tSNE Value Error when no classes are specified
Body: See [yellowbrick t-SNE fit raises ValueError](https://stackoverflow.com/questions/48950135/yellowbrick-t-sne-fit-raises-valueerror):
The issue has to do with the way numpy arrays are evaluated in 1.13 - we need to change [yellowbrick.text.tsne Line 256](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/text/tsne.py#L256) to `if y is not None and self.classes_ is None`
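For context, a minimal standalone snippet (not yellowbrick code) showing why a truthiness check fails on multi-element NumPy arrays while the proposed `is not None` check is safe:
```python
import numpy as np

y = np.array([0, 1, 2])

print(y is not None)  # True -- the proposed check works for arrays
try:
    if y:  # the current `if y and ...` style check
        pass
except ValueError as err:
    print(err)  # "The truth value of an array with more than one element is ambiguous..."
```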
code to produce:
```python
import pandas as pd
from yellowbrick.text import TSNEVisualizer
from sklearn.datasets import make_classification
## produce random data
X, y = make_classification(n_samples=200, n_features=100,
                           n_informative=20, n_redundant=10,
                           n_classes=3, random_state=42)
## visualize data with t-SNE
tsne = TSNEVisualizer()
tsne.fit(X, y)
tsne.poof()
```
error raised:
```
Traceback (most recent call last):
File "t.py", line 12, in <module>
tsne.fit(X, y)
File "/Users/benjamin/Workspace/ddl/yellowbrick/yellowbrick/text/tsne.py", line 256, in fit
if y and self.classes_ is None:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
``` | 0easy
|
Title: Using numpy.int64 as `offset` argument for `ema()` causes it to be treated as a zero
Body: **Which version are you running? The latest version is on GitHub. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
0.3.14b0
**Do you have _TA Lib_ also installed in your environment?**
```sh
$ pip list
```
Yes, tried both with and without TA-Lib installed
**Have you tried the _development_ version? Did it resolve the issue?**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta.git@development
```
Yes, 0.3.79b0. The error still occurs
**Describe the bug**
When using a numpy int64 instead of an int for at least the `offset` value in `df.ta.ema()`, it gets treated as a zero and gives the wrong result.
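For illustration only (this is not pandas-ta's actual code): if the offset validation uses a plain `isinstance(offset, int)` check, a `numpy.int64` silently falls back to the default on Python 3:
```python
import numpy as np

def get_offset(x, default=0):
    # hypothetical validation similar to what an indicator library might do
    return x if isinstance(x, int) else default

print(get_offset(5))            # 5
print(get_offset(np.int64(5)))  # 0 -- np.int64 is not a subclass of int on Python 3
print(isinstance(np.int64(5), (int, np.integer)))  # True -- a broader check would accept it
```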
**To Reproduce**
Provide sample code.
```python
import pandas as pd
import numpy as np
import pandas_ta as ta
print(ta.version)
s = pd.DataFrame(range(1,20), columns=['close'])
five = 5
five_int64 = np.int64(five)
assert(s.ta.ema(3, five).equals(s.ta.ema(3, int(five_int64)))) # works
assert(s.ta.ema(3, five).equals(s.ta.ema(3, five_int64))) # does not work, but should
```
**Expected behavior**
When an int-like is passed in, it is successfully treated like an int
**Additional context**
This might apply to other indicators as well; I didn't test them.
| 0easy
|
Title: Tutorial request: Streamlit
Body: There have been multiple requests for a streamlit tutorial. For this task, we're looking for a tutorial or blogpost which showcases the use of `supabase` with Streamlit.
There are no restrictions on the type of tutorial and any number of people can take this issue up. This issue is open for creative exploration but feel free to tag/ping me if ideas are needed. | 0easy
|
Title: Feature Proposal: Enhanced Single-File Execution in Any Directory Structure
Body: #### **Current State**
- nextpy's `run` command requires a predefined directory structure for execution.
#### **Objective**
- Enable users to execute individual files with `nextpy run`, regardless of the directory structure, similar to tools like Streamlit.
#### **Proposal Details**
**1. Command Enhancement:**
- Modify `nextpy run` to primarily accept a file path:
- `-f` or `--file`: Path to the target file (relative or absolute).
- `-t` or `--theme`: Optional theme (defaults to standard if not provided).
**2. Operational Logic:** (a minimal sketch appears after this list)
- Ensure the file's existence at the given path.
- Determine the file type (e.g., `.mdx`, `.py`) and execute accordingly.
- For `.mdx` files, use the default MDX viewer, or apply the specified theme.
- For `.py` files, execute the script in an isolated or appropriate environment.
- Provide error messages and graceful exits for unsupported file types or execution errors.
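A minimal sketch of what this could look like, assuming Typer (which the TODO below already references); the `render_mdx`/`execute_python` helpers are placeholders, not existing nextpy functions:
```python
from pathlib import Path
from typing import Optional

import typer

app = typer.Typer()

def render_mdx(path: Path, theme: Optional[str]) -> None:
    # placeholder: a real implementation would invoke the default MDX viewer
    typer.echo(f"Rendering {path} with theme={theme or 'standard'}")

def execute_python(path: Path) -> None:
    # placeholder: a real implementation would run the script in an isolated environment
    typer.echo(f"Executing {path}")

@app.command()
def run(
    file: Path = typer.Option(..., "-f", "--file", help="Path to the target file"),
    theme: Optional[str] = typer.Option(None, "-t", "--theme", help="Optional theme"),
) -> None:
    if not file.exists():
        typer.echo(f"Error: {file} does not exist.", err=True)
        raise typer.Exit(code=1)
    suffix = file.suffix.lower()
    if suffix == ".mdx":
        render_mdx(file, theme)
    elif suffix == ".py":
        execute_python(file)
    else:
        typer.echo(f"Unsupported file type: {suffix}", err=True)
        raise typer.Exit(code=1)

if __name__ == "__main__":
    app()
```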
**3. Environment Independence:**
- Eliminate dependency on a specific project structure.
- Set up the necessary minimal environment based on the file type.
- Allow environment variables or configurations to be specified via command line or external config files for flexibility.
**4. Theme Integration and Default Viewer:**
- Same as previously proposed.
#### **Sample Usage:**
**Run a File in Any Directory:**
```bash
$ nextpy run -f /path/to/file.mdx
$ nextpy run -f ./relative/path/to/script.py
```
**With Custom Theme:**
```bash
$ nextpy run -f /path/to/file.mdx -t bespoke_theme
```
#### **Resultant Execution:**
- Executing this command will focus on the specified file only, setting up a minimal environment as required.
- The command will be independent of the directory structure, allowing flexibility in file organization.
#### **Advantages:**
- Streamlines the process of running individual files, making it more accessible and flexible.
- Reduces the necessity for a comprehensive initial setup, facilitating quick testing and prototyping.
- Enhances the user experience, particularly beneficial for new users and in educational contexts.
#### **Implementation Considerations:**
- Robust error handling and informative messaging for user guidance.
- Extensive testing with various file locations and types to ensure reliability.
- Documentation updates to guide users on new command usage and capabilities.
#### **Potential Directory Structure Post-Execution:**
```bash
.any_directory
├── executed_file.mdx (or .py, etc.)
├── .web (if necessary for execution)
└── other_files_or_directories (unrelated to nextpy execution)
```
# TODO
### Step 1: Requirements Analysis
- [ ] Analyze the current functionality of the `nextpy run` command.
- [ ] Identify the limitations with respect to directory structure dependence.
### Step 2: Design Command Enhancements
- [ ] Design the new command line interface to accept a file path (`-f` or `--file`) and an optional theme (`-t` or `--theme`).
- [ ] Plan how the command will handle different file types (e.g., `.mdx`, `.py`).
### Step 3: Update Command Line Interface
- [ ] Modify the `run` command in Typer to accept the file path as a primary argument.
- [ ] Implement the logic to parse and handle the optional theme argument.
### Step 4: Implement File Detection and Execution Logic
- [ ] Develop a function to validate the existence of the specified file.
- [ ] Create a function to determine the file type based on the file extension or content.
- [ ] Implement the execution logic for `.mdx` files, including using the default MDX viewer or the specified theme.
- [ ] Implement the execution logic for `.py` files, ensuring they run correctly irrespective of the directory structure.
### Step 5: Environment Setup
- [ ] Establish a minimal environment setup required for the execution of the specified file.
- [ ] Ensure environment variable or configuration can be set externally if required.
### Step 6: Testing and Quality Assurance
- [ ] Conduct unit testing for new functionalities.
- [ ] Perform integration testing to ensure the new command works seamlessly with existing features.
- [ ] Test the command with various file types and in different directory structures.
### Step 7: Error Handling and User Feedback
- [ ] Implement robust error handling for unsupported file types, missing files, and execution errors.
- [ ] Develop user-friendly error messages and guidance for troubleshooting.
### Step 8: Documentation and Examples
- [ ] Update the project documentation to include the new command usage.
- [ ] Provide example use cases and command line inputs in the documentation. | 0easy
|
Title: MugglePay (麻瓜) payment has a problem: the payment can succeed, but the callback is never triggered
Body: After paying, the money is deducted, but the password is never received. | 0easy
|
Title: [PolygonZone] - drop `frame_resolution_wh` requirement
Body: ### Description
Currently, the initialization of [sv.PolygonZone](https://supervision.roboflow.com/0.19.0/detection/tools/polygon_zone/) requires passing `frame_resolution_wh` in the constructor. Upon further reflection, this seems totally unnecessary.
- Change the [`self.mask`](https://github.com/roboflow/supervision/blob/4729e20a9408fbe08d342cc4dbae835d808686a5/supervision/detection/tools/polygon_zone.py#L54) initialization located in the constructor. Instead of basing the shape of the mask on `frame_resolution_wh`, calculate the `x_max` and `y_max` of our `polygon` and create a mask of a size sufficient for the `polygon` to fit on it.
```python
x_max, y_max = np.max(polygon, axis=0)
self.mask = polygon_to_mask(
polygon=polygon, resolution_wh=(x_max + 1, y_max + 1)
)
```
- Print an appropriate deprecation warning whenever someone sets the value of `frame_resolution_wh`, notifying that `frame_resolution_wh` is no longer required and will be dropped in version `supervision-0.24.0` (a sketch of such a warning appears after this list).
- Drop `frame_resolution_wh` from docs.
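A minimal sketch of how that deprecation notice could be emitted (wording and placement are illustrative, not the final implementation):
```python
import warnings

def warn_frame_resolution_wh_deprecated() -> None:
    # intended to be called from PolygonZone.__init__ whenever frame_resolution_wh is passed
    warnings.warn(
        "The `frame_resolution_wh` parameter is no longer required and will be "
        "dropped in supervision-0.24.0.",
        DeprecationWarning,
        stacklevel=2,
    )
```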
### API
Example usage after making the changes.
```python
import numpy as np
import supervision as sv
from ultralytics import YOLO
from supervision.assets import download_assets, VideoAssets
download_assets(VideoAssets.VEHICLES)
polygon = np.array([[1252, 787], [2298, 803], [5039, 2159], [-550, 2159]])
zone = sv.PolygonZone(polygon=polygon)
model = YOLO("yolov8x.pt")
color = sv.Color.ROBOFLOW
color_annotator = sv.ColorAnnotator(color=color)
def callback(scene: np.ndarray, index: int) -> np.ndarray:
result = model(scene)[0]
detections = sv.Detections.from_ultralytics(result)
detections_in_zone = detections[zone.trigger(detections=detections)]
annotated_frame = scene.copy()
annotated_frame = sv.draw_polygon(scene, polygon=polygon, color=color)
annotated_frame = color_annotator.annotate(scene, detections=detections_in_zone)
return annotated_frame
sv.process_video(
source_path="vehicles.mp4",
target_path="vehicles-result.mp4",
callback=callback
)
```
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | 0easy
|
Title: Loading issues with docx files saved using the latest version of WPS
Body: **Describe the bug**
If a docx file is saved with WPS version 12.1.0.19770, reading it with Haystack's DOCXToDocument reports the following error
**Error message**
```
[2025-01-21 17:04:03,529: WARNING/MainProcess] Could not read ByteStream(data=b'PK\x03\x04\x14\x00\x08\x08\x08\x00(\x8f5Z\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x13\x00\x00\x00[Content_Types].xml\xb5\x95\xcbN\xc30\x10E\x7f%\xf2\x16%N+(\x055\xed\x82\x87\xc4\x06\xba(kd\xecIj\x11?d\xbb\xa5\xfd{&iS\t\x8a\n"fg\xe7\x8e\xcf\x9d\x19...', meta={'file_path': '外卖配置.docx', 'field': 'lakala'}, mime_type='application/vnd.openxmlformats-officedocument.wordprocessingml.document') and convert it to a DOCX Document, skipping. Error: '_cython_3_0_11.cython_function_or_method' object has no attribute 'endswith'
```
**Expected behavior**
no bug
**Additional context**
no
**To Reproduce**
this is my test code.
```python
import io
import docx
with open("/data1/hjc/rag/text.docx", "rb") as f:
files = f.read()
document = docx.Document(io.BytesIO(files))
elements = []
for i, element in enumerate(document.element.body):
print(type(element))
print(element)
print(".....")
if i != 0:
print(element.tag.endswith("p"))
```
this is result
```
<class 'lxml.etree._Comment'>
<!-- Created by docx4j 6.1.2 (Apache licensed) using REFERENCE JAXB in Oracle Java 1.8.0_131 on Linux -->
this is first line
.....
<class 'docx.oxml.text.paragraph.CT_P'>
<CT_P '<w:p>' at 0x7f469429ac00>
True
.....
<class 'docx.oxml.text.paragraph.CT_P'>
<CT_P '<w:p>' at 0x7f46931976f0>
True
.....
<class 'lxml.etree._Element'>
<Element {http://schemas.openxmlformats.org/wordprocessingml/2006/main}bookmarkEnd at 0x7f46931a1a00>
False
```
So the reason for this error is that the new version of WPS inserts an extra comment line when saving docx files:
`<!-- Created by docx4j 6.1.2 (Apache licensed) using REFERENCE JAXB in Oracle Java 1.8.0_131 on Linux -->`
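A minimal sketch of a defensive fix (standalone, not Haystack's actual converter code): skip body elements whose `tag` is not a string, which is the case for lxml comment nodes such as the one WPS/docx4j inserts:
```python
import io

import docx

def iter_body_elements(data: bytes):
    document = docx.Document(io.BytesIO(data))
    for element in document.element.body:
        # lxml comment nodes expose a callable `tag` (lxml.etree.Comment), not a string,
        # so calling .endswith() on it raises; skip anything that is not a regular element
        if not isinstance(element.tag, str):
            continue
        yield element
```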
**FAQ Check**
- [√ ] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: linux
- GPU/CPU: cpu
- Haystack version (commit or version number): 2.9.0
- DocumentStore:
- Reader:
- Retriever:
| 0easy
|
Title: Add favicon to report
Body: Add favicon to report
How it is now

It should have the ScanAPI icon
https://github.com/scanapi/design/blob/master/images/icon-light.png | 0easy
|
Title: VirusTotal
Body: **User story:** I have a malware sample (file or hash) or potentially malicious URL that I want to scan / gather more evidence on.
**Details**
- VirusTotal has multiple offerings. Their flagship product is [IOC REPUTATION & ENRICHMENT](https://docs.virustotal.com/reference/ip-info). They also have public and private APIs (enterprise).
- For Tracecat Beta, we will focus on the six most [popular public endpoints](https://docs.virustotal.com/reference/overview#most-popular-api-endpoints) in IOC search (a minimal call sketch follows this list)
- [x] [Get a file report by hash](https://docs.virustotal.com/reference/file-info)
- [x] [Get a URL analysis report](https://docs.virustotal.com/reference/url-info)
- [x] [Get a domain report](https://docs.virustotal.com/reference/domain-info)
- [x] [Get an IP address report](https://docs.virustotal.com/reference/ip-info)
- We will continue adding support for other endpoints according to user interest and needs
- Note: we do not plan support for [uploading files to VirusTotal](https://docs.virustotal.com/reference/files-scan) at the moment. This would require a more complex security model for quarantine and storing the original malicious files.
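For reference, a minimal sketch of calling the first endpoint in the list above (the v3 REST API with an `x-apikey` header, per the linked docs); the key value is a placeholder:
```python
import requests

API_KEY = "YOUR_VT_API_KEY"  # placeholder

def get_file_report(file_hash: str) -> dict:
    # "Get a file report by hash" -> GET /api/v3/files/{id}
    url = f"https://www.virustotal.com/api/v3/files/{file_hash}"
    response = requests.get(url, headers={"x-apikey": API_KEY})
    response.raise_for_status()
    return response.json()
```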
**Dependencies**
@daryllimyt we need some way to upload and pass along files `Content-Type: multipart/form-data`. VirusTotal does full file scans, not just hashes.
API reference: https://docs.virustotal.com/docs/api-overview
Prior art: [Official VT Python client](https://github.com/VirusTotal/vt-py) | 0easy
|
Title: Add examples for response & error handling
Body: We should add examples on how to write response and error handlers using `response_handler` and `error_handler`. Check out [this guide](https://uplink.readthedocs.io/en/stable/user/quickstart.html#response-and-error-handling) for a quick overview of these decorators.
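A rough sketch of what such an example could cover, based on the quickstart guide linked above (treat the handler signatures as assumptions to verify against the docs):
```python
from uplink import Consumer, error_handler, get, response_handler

def raise_for_status(response):
    # runs on every response before it is returned to the caller
    response.raise_for_status()
    return response

def wrap_transport_errors(exc_type, exc_val, exc_tb):
    # runs when the underlying client raises; re-raise as a domain-specific error
    raise RuntimeError("request to GitHub failed") from exc_val

@response_handler(raise_for_status)
@error_handler(wrap_transport_errors)
class GitHub(Consumer):
    @get("repos/{owner}/{repo}")
    def get_repo(self, owner, repo):
        """Fetch a single repository."""

# usage: github = GitHub(base_url="https://api.github.com")
```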
These examples should reside under the [`examples`](https://github.com/prkumar/uplink/tree/master/examples) directory. | 0easy
|
Title: Make contribution platform rules visible in analysis
Body: https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-github-profile/managing-contribution-settings-on-your-profile/why-are-my-contributions-not-showing-up-on-my-profile
Augur does an excellent job taking all contribution types into account. There are platform-specific rules for which contributions are counted in GitHub statistics. Augur's differences are based on value judgements and the culture of the companies, groups, OSPOs, etc. that use Augur.
**Potential solutions:**
Make these differences visible and explain them in context.
| 0easy
|
Title: [Points] Clarify `Select All` Points layer keybinds
Body: Regarding Points selection keybindings:
_Originally posted by @mstabrin in https://github.com/napari/napari/issues/4809#issuecomment-1186945981_
> Currently it is implemented like:
> `control+a` -> Select all `shown` points in the slice
> `a` -> Select all `shown` points in the slice
> `shift+a` -> Select all points in the layer
>
> From a practical point of view it might be beneficial to have 3 options maybe? Like:
>
> `control+a` -> Select all points in the slice
> `a` -> Select all `shown` points in the slice
> `shift+a` -> Select all points in the layer
>
> And adjust the print messages accordingly?
The current keybinding information in the Preferences for `A` and the message when pressing it does not specify the `shown` behavior. So at a minimum, we should:
- Ensure that the `A` keybinding info indicates it selects *shown* points, as does the message.
Otherwise implementing the other modes is optional? `shown` isn't well documented nor accessible in the GUI.
See also: https://github.com/napari/napari/pull/4833 | 0easy
|
Title: [Feature request] Add apply_to_images to CropNonEmptyMaskIfExists
Body: | 0easy
|
Title: Add IsYearEnd primitive
Body: - This primitive determines the `is_year_end` of a datetime column | 0easy
|
Title: DOC: docstrings in robust.norms, improve, reorganize
Body: The docstrings for methods in robust norm have the formulas in the `Returns` section.
Would be better before Parameter section as extended description.
Also converting to `math` will look better if we can get table formatting of equations (align `for z ...` in piecewise functions)
| 0easy
|
Title: docker-compose build failed. /usr/bin/env: ‘python\r’: No such file or directory
Body: #10 169.0 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
#10 169.2 /usr/bin/env: ‘python\r’: No such file or directory
------
failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c if [ "$with_models" = "true" ]; then pip install -e .; ./install_models.py; fi]: exit code: 127 | 0easy
|
Title: Be able to configure a max timeout for long-running reports
Body: The current limit set in the report_hunter is 60 minutes. There should be an option to extend this. | 0easy
|
Title: Assertions unsafe in production code
Body: I've noticed while reading graphene and graphene-django that assertions are used frequently to protect from operations with undefined results or things that should be impossible (but might still happen if used incorrectly).
In Python, assertions are not safe to use for this in general. This is because they can be disabled, and often are in production.
Instead of:
```python
assert something is not None, "Something was unexpectedly None"
```
We should do this:
```python
if something is None:
    raise AssertionError("Something was unexpectedly None")
```
This is the recommended best practice in Python development where assertions are being used. | 0easy
|
Title: Mention log.html if long message is cut
Body: Relevant error messages may be hidden by Robot if the output is verbose. The current warning is `[ Message content over the limit has been removed. ]`, but it's not obvious that Robot does actually preserve the full output in log.html. I think it would be helpful to change the warning to something like `[ Message content over the limit has been removed, see log.html for the full output. ]`
I can make a PR to fix this, just wanted to make an issue first in case there are concerns. | 0easy
|
Title: Support pydantic v1
Body: https://pydanticlogfire.slack.com/archives/C06EDRBSAH3/p1715039582297379
It doesn't have to be rendered nicely, but it shouldn't break. | 0easy
|
Title: Remove the deprecation warning from CrawlerRunner._get_spider_loader()
Body: Added in 1.0.0 (cc4c31e42673813be0739f3a01dfb5d7fb9417cf), I think it's fine to get those runtime errors now. | 0easy
|
Title: HDF5 files are just 1kb, I cannot use those weights to optimize my own pictures
Body: OSError: Unable to open file (unable to open file: name = 'D:\Download\image-super-resolution-master\image-super-resolution-master\weights\rdn-C6-D20-G64-G064-x2_enhanced-e219.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
Thanks in advance | 0easy
|
Title: Missing libraries @development installed using UV
Body: ### Describe the bug
uv tool install git+https://github.com/openinterpreter/open-interpreter.git@development
running:
```
interpreter
```
missing libraries:
uvicorn fastapi pyautogui
Got it running using:
```
uv tool install git+https://github.com/openinterpreter/open-interpreter.git@development --with uvicorn --with fastapi --with pyautogui
```
### Expected behavior
Install missing libraries by default.
### Open Interpreter version
development branch
### Python version
3.12
### Operating System name and version
Ubuntu 24.04
### Additional context
_No response_ | 0easy
|
Title: Use `ruff` in CI checks
Body: Start using [ruff](https://github.com/astral-sh/ruff) as a substitute to `isort`, `flake8`, and `pyupgrade`.
Consider also enabling linting of Jupyter notebooks:
- https://beta.ruff.rs/docs/faq/#does-ruff-support-jupyter-notebooks | 0easy
|
Title: DEV: Use ruff for linting
Body: Use ruff for linting instead of flake8 which is slow in pre-commit checks. | 0easy
|
Title: Disable Imports on SQLite database provider
Body: I don't really want to implement support for migrations on SQLite. Self hosted users should use docker-compose instead of just the Docker image. So I'd rather simply disallow imports if you're using SQLite. | 0easy
|
Title: use_index arg
Body: Originally asked at SO: https://stackoverflow.com/questions/64664210/dont-use-index-in-pandas-profiling/69519138#69519138
I don't have any interest in summarizing the index.
I know I can reset beforehand, i.e. `df = df.reset_index(drop=True)`, but it would be nice to do `ProfileReport(df, use_index=False)` | 0easy
|
Title: Bug: Spinner is not killed when parent thread throws an exception
Body: <!-- Please use the appropriate issue title format:
BUG REPORT
Bug: {Short description of bug}
SUGGESTION
Suggestion: {Short description of suggestion}
OTHER
{Question|Discussion|Whatever}: {Short description} -->
## Description
Currently, the thread is not killed when the parent thread throws an exception. A workaround for now is to catch the exception and stop the spinner manually.
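A minimal sketch of the manual workaround mentioned above (catching the exception and stopping the spinner before handling it):
```python
from halo import Halo

spinner = Halo(text='Such Spins', spinner='dots')
spinner.start()
try:
    raise Exception('boom')  # whatever the parent thread was doing
except Exception as err:
    spinner.fail('Failed')   # or spinner.stop() to clear the spinner line
    print(f'handled: {err}')
```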
### System settings
- Operating System: Any
- Terminal in use: Any
- Python version: 2.7.14 (actually any)
- Halo version: 0.0.12
- `pip freeze` output: NA
### Error
Spinner keeps spinning when the exception is thrown on parent thread.
### Expected behaviour
The spinner should be stopped when an exception is raised.
## Steps to recreate
```py
# -*- coding: utf-8 -*-
"""Example for doge spinner ;)
"""
from __future__ import unicode_literals
import os
import sys
import time
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath('.'))))
from halo import Halo
spinner = Halo(text='Such Spins', spinner='dots')
spinner.start()
time.sleep(2)
spinner.text = 'Much Colors'
spinner.color = 'magenta'
time.sleep(2)
spinner.text = 'Very emojis'
spinner.spinner = 'hearts'
raise Exception()
time.sleep(2)
spinner.stop_and_persist(symbol='🦄 '.encode('utf-8'), text='Wow!')
```
## People to notify
@ManrajGrover
| 0easy
|
Title: Test factory_boy against Python 3.7
Body: #### The problem
``factory_boy`` should be tested against python 3.7.
#### Extra notes
There might be some issues with configuring Travis-CI with python 3.7.
#### Checklist:
- [x] Add the target to ``tox.ini``
- [ ] Add that target to ``.travis.yml``
- [ ] Update the trove classifiers in ``setup.py``
- [ ] Update the list of supported version in the docs | 0easy
|
Title: Can't type or paste parens or brackets into Text field
Body: Repro is pretty simple. Just create a text field:
```
name = mr.Text(value="John", label="What is your name?", rows=1)
```
And try pasting or typing one of the following characters: ) ( ] [ | 0easy
|
Title: AsyncClient.post() got an unexpected keyword argument 'proxies'
Body: It seems that the httpx documentation does not fully reflect its current code. The documentation says that the ```proxies``` argument to ```AsyncClient.post()``` is available, but in reality it's not:
```
AsyncClient.post() got an unexpected keyword argument 'proxies'
```
After removing the ```proxies``` argument from the bardapi code, everything works correctly.
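For reference, a minimal sketch of how proxies are configured in httpx 0.25.x: on the client itself rather than per request (the proxy URL below is a placeholder):
```python
import asyncio

import httpx

async def main() -> None:
    # `proxies` is an AsyncClient constructor argument, not a .post() keyword
    async with httpx.AsyncClient(proxies="http://localhost:8080") as client:
        response = await client.post("https://example.com", data={"q": "hi"})
        print(response.status_code)

# asyncio.run(main())  # requires a reachable proxy at the placeholder URL
```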
httpx 0.25.0 | 0easy
|
Title: [ENH] Collection normaliser transformer
Body: ### Describe the feature or idea you want to propose
When writing notebooks, it would be helpful to have a normaliser BaseCollectionTransformer that could z-normalise, standardise, or min/max scale, even as a wrapper around numpy, since it needs to be done on the correct axis, etc.
### Describe your proposed solution
something like
```python
from aeon.transformations.collection import Normalise
norm = Normalise()
xt = norm.fit_transform(x)
norm = Normalise(method="min_max")
xt = norm.fit_transform(x)
```
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | 0easy
|
Title: [DOC] International Teacher Data notebook should not have 1st-level markdown beyond title
Body: # Brief Description of Fix
Currently, the example on [Processing International Teacher data](https://pyjanitor.readthedocs.io/notebooks/teacher_pupil.html) has two top-level markdown titles: One for `Processing International Teacher Data`, and the other for `Data Dictionary`.
I would like to propose a change, such that `Data Dictionary` gets demoted to 2nd level. This would prevent it from showing up on the navigation tab of the docs.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to documentation page](https://pyjanitor.readthedocs.io/notebooks/teacher_pupil.html#Data-Dictionary)
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/examples/notebooks/teacher_pupil.ipynb)
| 0easy
|
Title: Agent Input Values Shouldn't Export or Save
Body: Currently the last value that you used in an Agent Input Block exports into the Agent JSON. This means that the Agent file is polluted with the last input the user tested their Agent with.
*This input value also saves to the database.*
Steps to reproduce:
1. Make an Agent with an Agent Input Block.
2. Run the Agent with a recognisable input value
3. Go to the Library Page and Export the Agent to a JSON file.
4. Open the JSON File and observe your unique input has been exported with the file.
---
### Intended Behaviour:
The value field on Agent inputs should be empty when saving or exporting Agents. | 0easy
|
Title: html.inline = False gets the src for javascript files wrong
Body: ### Current Behaviour
setting `profile.config.html.inline = False`
and then `profile.to_file("all_data/longi_report.html")`
assets are stored in `longi_report_assets/`
however the html file in several places has `src=_assets`
Loading the HTML file gives a broken page
### Expected Behaviour
Correct prefix is used.
### Data Description
N/A
### Code that reproduces the bug
```Python
profile = ProfileReport(data, title="Longitudinal profiling", minimal=True)
profile.config.html.inline = False
profile.to_file("all_data/longi_report.html")
```
### pandas-profiling version
v4.1.2
### Dependencies
```Text
# packages in environment at /gpfs/fs1/home/m/mchakrav/gdevenyi/mambaforge:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
aiohttp 3.8.4 py310h1fa729e_0 conda-forge
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
alsa-lib 1.2.8 h166bdaf_0 conda-forge
aom 3.5.0 h27087fc_0 conda-forge
argcomplete 3.0.5 pyhd8ed1ab_0 conda-forge
arrow-cpp 11.0.0 ha770c72_13_cpu conda-forge
asttokens 2.2.1 pyhd8ed1ab_0 conda-forge
async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge
attr 2.5.1 h166bdaf_1 conda-forge
attrs 22.2.0 pyh71513ae_0 conda-forge
aws-c-auth 0.6.26 hf365957_1 conda-forge
aws-c-cal 0.5.21 h48707d8_2 conda-forge
aws-c-common 0.8.14 h0b41bf4_0 conda-forge
aws-c-compression 0.2.16 h03acc5a_5 conda-forge
aws-c-event-stream 0.2.20 h00877a2_4 conda-forge
aws-c-http 0.7.6 hf342b9f_0 conda-forge
aws-c-io 0.13.19 h5b20300_3 conda-forge
aws-c-mqtt 0.8.6 hc4349f7_12 conda-forge
aws-c-s3 0.2.7 h909e904_1 conda-forge
aws-c-sdkutils 0.1.8 h03acc5a_0 conda-forge
aws-checksums 0.1.14 h03acc5a_5 conda-forge
aws-crt-cpp 0.19.8 hf7fbfca_12 conda-forge
aws-sdk-cpp 1.10.57 h17c43bd_8 conda-forge
backcall 0.2.0 pyh9f0ad1d_0 conda-forge
backports 1.0 pyhd8ed1ab_3 conda-forge
backports.functools_lru_cache 1.6.4 pyhd8ed1ab_0 conda-forge
backports.zoneinfo 0.2.1 py310hff52083_7 conda-forge
boltons 23.0.0 pyhd8ed1ab_0 conda-forge
brotli 1.0.9 h166bdaf_8 conda-forge
brotli-bin 1.0.9 h166bdaf_8 conda-forge
brotlipy 0.7.0 py310h5764c6d_1005 conda-forge
bzip2 1.0.8 h7f98852_4 conda-forge
c-ares 1.18.1 h7f98852_0 conda-forge
ca-certificates 2023.5.7 hbcca054_0 conda-forge
cairo 1.16.0 ha61ee94_1014 conda-forge
certifi 2023.5.7 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py310h255011f_3 conda-forge
charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge
click 8.1.3 unix_pyhd8ed1ab_2 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
comm 0.1.3 pyhd8ed1ab_0 conda-forge
conda 23.3.1 py310hff52083_0 conda-forge
conda-package-handling 2.0.2 pyh38be061_0 conda-forge
conda-package-streaming 0.7.0 pyhd8ed1ab_1 conda-forge
contourpy 1.0.7 py310hdf3cbec_0 conda-forge
cryptography 40.0.1 py310h34c0648_0 conda-forge
curl 7.88.1 hdc1c0ab_1 conda-forge
cycler 0.11.0 pyhd8ed1ab_0 conda-forge
dbus 1.13.6 h5008d03_3 conda-forge
debugpy 1.6.7 py310heca2aa9_0 conda-forge
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
double-conversion 3.2.0 h27087fc_1 conda-forge
eigen 3.4.0 h4bd325d_0 conda-forge
executing 1.2.0 pyhd8ed1ab_0 conda-forge
expat 2.5.0 hcb278e6_1 conda-forge
ffmpeg 5.1.2 gpl_h8dda1f0_106 conda-forge
fftw 3.3.10 nompi_hf0379b8_106 conda-forge
fmt 9.1.0 h924138e_0 conda-forge
font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge
font-ttf-inconsolata 3.000 h77eed37_0 conda-forge
font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge
font-ttf-ubuntu 0.83 hab24e00_0 conda-forge
fontconfig 2.14.2 h14ed4e7_0 conda-forge
fonts-conda-ecosystem 1 0 conda-forge
fonts-conda-forge 1 0 conda-forge
fonttools 4.39.3 py310h1fa729e_0 conda-forge
freetype 2.12.1 hca18f0e_1 conda-forge
frozenlist 1.3.3 py310h5764c6d_0 conda-forge
gettext 0.21.1 h27087fc_0 conda-forge
gflags 2.2.2 he1b5a44_1004 conda-forge
gl2ps 1.4.2 h0708190_0 conda-forge
glew 2.1.0 h9c3ff4c_2 conda-forge
glib 2.74.1 h6239696_1 conda-forge
glib-tools 2.74.1 h6239696_1 conda-forge
glog 0.6.0 h6f12383_0 conda-forge
gmp 6.2.1 h58526e2_0 conda-forge
gnutls 3.7.8 hf3e180e_0 conda-forge
graphite2 1.3.13 h58526e2_1001 conda-forge
gst-plugins-base 1.22.0 h4243ec0_2 conda-forge
gstreamer 1.22.0 h25f0c4b_2 conda-forge
gstreamer-orc 0.4.33 h166bdaf_0 conda-forge
harfbuzz 6.0.0 h8e241bc_0 conda-forge
hdf4 4.2.15 h9772cbc_5 conda-forge
hdf5 1.12.2 nompi_h4df4325_101 conda-forge
htmlmin 0.1.12 py_1 conda-forge
icu 70.1 h27087fc_0 conda-forge
idna 3.4 pyhd8ed1ab_0 conda-forge
imagehash 4.3.1 pyhd8ed1ab_0 conda-forge
importlib-metadata 6.1.0 pyha770c72_0 conda-forge
importlib_metadata 6.1.0 hd8ed1ab_0 conda-forge
importlib_resources 5.12.0 pyhd8ed1ab_0 conda-forge
ipykernel 6.23.1 pyh210e3f2_0 conda-forge
ipython 8.12.0 pyh41d4057_0 conda-forge
ipywidgets 8.0.6 pyhd8ed1ab_0 conda-forge
itk 5.3.0 py310hfdc917e_0 conda-forge
itk-core 5.3.0 pypi_0 pypi
itk-filtering 5.3.0 pypi_0 pypi
itk-numerics 5.3.0 pypi_0 pypi
itk-registration 5.3.0 pypi_0 pypi
itk-segmentation 5.3.0 pypi_0 pypi
jack 1.9.22 h11f4161_0 conda-forge
jedi 0.18.2 pyhd8ed1ab_0 conda-forge
jinja2 3.1.2 pyhd8ed1ab_1 conda-forge
joblib 1.2.0 pyhd8ed1ab_0 conda-forge
jpeg 9e h0b41bf4_3 conda-forge
jsoncpp 1.9.5 h4bd325d_1 conda-forge
jsonpatch 1.32 pyhd8ed1ab_0 conda-forge
jsonpointer 2.0 py_0 conda-forge
jupyter_client 8.2.0 pyhd8ed1ab_0 conda-forge
jupyter_core 5.3.0 py310hff52083_0 conda-forge
jupyterlab_widgets 3.0.7 pyhd8ed1ab_1 conda-forge
keyutils 1.6.1 h166bdaf_0 conda-forge
kiwisolver 1.4.4 py310hbf28c38_1 conda-forge
krb5 1.20.1 h81ceb04_0 conda-forge
lame 3.100 h166bdaf_1003 conda-forge
lcms2 2.14 h6ed2654_0 conda-forge
ld_impl_linux-64 2.40 h41732ed_0 conda-forge
lerc 4.0.0 h27087fc_0 conda-forge
libabseil 20230125.0 cxx17_hcb278e6_1 conda-forge
libaec 1.0.6 hcb278e6_1 conda-forge
libarchive 3.6.2 h3d51595_0 conda-forge
libarrow 11.0.0 h93537a5_13_cpu conda-forge
libblas 3.9.0 16_linux64_openblas conda-forge
libbrotlicommon 1.0.9 h166bdaf_8 conda-forge
libbrotlidec 1.0.9 h166bdaf_8 conda-forge
libbrotlienc 1.0.9 h166bdaf_8 conda-forge
libcap 2.67 he9d0100_0 conda-forge
libcblas 3.9.0 16_linux64_openblas conda-forge
libclang 15.0.7 default_had23c3d_1 conda-forge
libclang13 15.0.7 default_h3e3d535_1 conda-forge
libcrc32c 1.1.2 h9c3ff4c_0 conda-forge
libcups 2.3.3 h36d4200_3 conda-forge
libcurl 7.88.1 hdc1c0ab_1 conda-forge
libdb 6.2.32 h9c3ff4c_0 conda-forge
libdeflate 1.14 h166bdaf_0 conda-forge
libdrm 2.4.114 h166bdaf_0 conda-forge
libedit 3.1.20191231 he28a2e2_2 conda-forge
libev 4.33 h516909a_1 conda-forge
libevent 2.1.10 h28343ad_4 conda-forge
libexpat 2.5.0 hcb278e6_1 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libflac 1.4.2 h27087fc_0 conda-forge
libgcc-ng 12.2.0 h65d4601_19 conda-forge
libgcrypt 1.10.1 h166bdaf_0 conda-forge
libgfortran-ng 12.2.0 h69a702a_19 conda-forge
libgfortran5 12.2.0 h337968e_19 conda-forge
libglib 2.74.1 h606061b_1 conda-forge
libglu 9.0.0 he1b5a44_1001 conda-forge
libgomp 12.2.0 h65d4601_19 conda-forge
libgoogle-cloud 2.8.0 h0bc5f78_1 conda-forge
libgpg-error 1.46 h620e276_0 conda-forge
libgrpc 1.52.1 hcf146ea_1 conda-forge
libhwloc 2.9.0 hd6dc26d_0 conda-forge
libiconv 1.17 h166bdaf_0 conda-forge
libidn2 2.3.4 h166bdaf_0 conda-forge
libitk 5.3.0 hcedbc38_0 conda-forge
liblapack 3.9.0 16_linux64_openblas conda-forge
libllvm15 15.0.7 hadd5161_1 conda-forge
libmamba 1.4.1 hcea66bb_0 conda-forge
libmambapy 1.4.1 py310h1428755_0 conda-forge
libnetcdf 4.8.1 nompi_h261ec11_106 conda-forge
libnghttp2 1.52.0 h61bc06f_0 conda-forge
libnsl 2.0.0 h7f98852_0 conda-forge
libnuma 2.0.16 h0b41bf4_1 conda-forge
libogg 1.3.4 h7f98852_1 conda-forge
libopenblas 0.3.21 pthreads_h78a6416_3 conda-forge
libopus 1.3.1 h7f98852_1 conda-forge
libpciaccess 0.17 h166bdaf_0 conda-forge
libpng 1.6.39 h753d276_0 conda-forge
libpq 15.2 hb675445_0 conda-forge
libprotobuf 3.21.12 h3eb15da_0 conda-forge
libsndfile 1.2.0 hb75c966_0 conda-forge
libsodium 1.0.18 h36c2ea0_1 conda-forge
libsolv 0.7.23 h3eb15da_0 conda-forge
libsqlite 3.40.0 h753d276_0 conda-forge
libssh2 1.10.0 hf14f497_3 conda-forge
libstdcxx-ng 12.2.0 h46fd767_19 conda-forge
libsystemd0 253 h8c4010b_1 conda-forge
libtasn1 4.19.0 h166bdaf_0 conda-forge
libtheora 1.1.1 h7f98852_1005 conda-forge
libthrift 0.18.1 h5e4af38_0 conda-forge
libtiff 4.4.0 h82bc61c_5 conda-forge
libtool 2.4.7 h27087fc_0 conda-forge
libudev1 253 h0b41bf4_1 conda-forge
libunistring 0.9.10 h7f98852_0 conda-forge
libutf8proc 2.8.0 h166bdaf_0 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libva 2.18.0 h0b41bf4_0 conda-forge
libvorbis 1.3.7 h9c3ff4c_0 conda-forge
libvpx 1.11.0 h9c3ff4c_3 conda-forge
libwebp-base 1.3.0 h0b41bf4_0 conda-forge
libxcb 1.13 h7f98852_1004 conda-forge
libxkbcommon 1.5.0 h79f4944_1 conda-forge
libxml2 2.10.3 hca2bb57_4 conda-forge
libzip 1.9.2 hc929e4a_1 conda-forge
libzlib 1.2.13 h166bdaf_4 conda-forge
loguru 0.6.0 py310hff52083_2 conda-forge
lz4-c 1.9.4 hcb278e6_0 conda-forge
lzo 2.10 h516909a_1000 conda-forge
mamba 1.4.1 py310h51d5547_0 conda-forge
markupsafe 2.1.2 py310h1fa729e_0 conda-forge
matplotlib-base 3.6.3 py310he60537e_0 conda-forge
matplotlib-inline 0.1.6 pyhd8ed1ab_0 conda-forge
mizani 0.8.1 pyhd8ed1ab_1 conda-forge
mpg123 1.31.3 hcb278e6_0 conda-forge
multidict 6.0.4 py310h1fa729e_0 conda-forge
multimethod 1.4 py_0 conda-forge
munkres 1.1.4 pyh9f0ad1d_0 conda-forge
mysql-common 8.0.32 ha901b37_1 conda-forge
mysql-libs 8.0.32 hd7da12d_1 conda-forge
ncurses 6.3 h27087fc_1 conda-forge
nest-asyncio 1.5.6 pyhd8ed1ab_0 conda-forge
nettle 3.8.1 hc379101_1 conda-forge
networkx 3.1 pyhd8ed1ab_0 conda-forge
nlohmann_json 3.11.2 h27087fc_0 conda-forge
nspr 4.35 h27087fc_0 conda-forge
nss 3.89 he45b914_0 conda-forge
numpy 1.23.5 py310h53a5b5f_0 conda-forge
openh264 2.3.1 hcb278e6_2 conda-forge
openjpeg 2.5.0 h7d73246_1 conda-forge
openssl 3.1.0 hd590300_3 conda-forge
orc 1.8.3 hfdbbad2_0 conda-forge
p11-kit 0.24.1 hc5aa10d_0 conda-forge
packaging 23.0 pyhd8ed1ab_0 conda-forge
palettable 3.3.0 py_0 conda-forge
pandas 1.5.3 py310h9b08913_1 conda-forge
parquet-cpp 1.5.1 2 conda-forge
parso 0.8.3 pyhd8ed1ab_0 conda-forge
patsy 0.5.3 pyhd8ed1ab_0 conda-forge
pcre2 10.40 hc3806b6_0 conda-forge
pexpect 4.8.0 pyh1a96a4e_2 conda-forge
phik 0.12.3 py310h7270e96_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 9.2.0 py310h454ad03_3 conda-forge
pip 23.0.1 pyhd8ed1ab_0 conda-forge
pipx 1.2.0 pyhd8ed1ab_0 conda-forge
pixman 0.40.0 h36c2ea0_0 conda-forge
platformdirs 3.2.0 pyhd8ed1ab_0 conda-forge
plotnine 0.10.1 pyhd8ed1ab_2 conda-forge
pluggy 1.0.0 pyhd8ed1ab_5 conda-forge
polars 0.17.13 py310hcb5633a_0 conda-forge
pooch 1.7.0 pyha770c72_3 conda-forge
proj 9.1.0 h93bde94_0 conda-forge
prompt-toolkit 3.0.38 pyha770c72_0 conda-forge
prompt_toolkit 3.0.38 hd8ed1ab_0 conda-forge
psutil 5.9.5 py310h1fa729e_0 conda-forge
pthread-stubs 0.4 h36c2ea0_1001 conda-forge
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pugixml 1.11.4 h9c3ff4c_0 conda-forge
pulseaudio 16.1 hcb278e6_3 conda-forge
pulseaudio-client 16.1 h5195f5e_3 conda-forge
pulseaudio-daemon 16.1 ha8d29e2_3 conda-forge
pure_eval 0.2.2 pyhd8ed1ab_0 conda-forge
pyarrow 11.0.0 py310h633f555_13_cpu conda-forge
pybind11-abi 4 hd8ed1ab_3 conda-forge
pycosat 0.6.4 py310h5764c6d_1 conda-forge
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pydantic 1.10.7 py310h1fa729e_0 conda-forge
pygments 2.14.0 pyhd8ed1ab_0 conda-forge
pyopenssl 23.1.1 pyhd8ed1ab_0 conda-forge
pyparsing 3.0.9 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.10.10 he550d4f_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge
python_abi 3.10 3_cp310 conda-forge
pytz 2023.3 pyhd8ed1ab_0 conda-forge
pywavelets 1.4.1 py310h0a54255_0 conda-forge
pyyaml 6.0 py310h5764c6d_5 conda-forge
pyzmq 25.0.2 py310h059b190_0 conda-forge
qt-main 5.15.8 h5d23da1_6 conda-forge
re2 2023.02.02 hcb278e6_0 conda-forge
readline 8.2 h8228510_1 conda-forge
reproc 14.2.4 h0b41bf4_0 conda-forge
reproc-cpp 14.2.4 hcb278e6_0 conda-forge
requests 2.28.2 pyhd8ed1ab_1 conda-forge
ruamel.yaml 0.17.21 py310h1fa729e_3 conda-forge
ruamel.yaml.clib 0.2.7 py310h1fa729e_1 conda-forge
s2n 1.3.41 h3358134_0 conda-forge
scikit-learn 1.2.2 py310h41b6a48_1 conda-forge
scipy 1.9.3 py310hdfbd76f_2 conda-forge
seaborn-base 0.12.2 pyhd8ed1ab_0 conda-forge
setuptools 67.6.1 pyhd8ed1ab_0 conda-forge
simpleitk 2.2.1 py310h2b9ea3a_1 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
snappy 1.1.10 h9fff704_0 conda-forge
sqlite 3.40.0 h4ff8645_0 conda-forge
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
statadict 1.1.0 pypi_0 pypi
statsmodels 0.13.5 py310hde88566_2 conda-forge
svt-av1 1.4.1 hcb278e6_0 conda-forge
sweetviz 2.1.4 pyhd8ed1ab_0 conda-forge
tangled-up-in-unicode 0.2.0 pyhd8ed1ab_0 conda-forge
tbb 2021.8.0 hf52228f_0 conda-forge
tbb-devel 2021.8.0 hf52228f_0 conda-forge
threadpoolctl 3.1.0 pyh8a188c0_0 conda-forge
tk 8.6.12 h27826a3_0 conda-forge
toolz 0.12.0 pyhd8ed1ab_0 conda-forge
tornado 6.3.2 py310h2372a71_0 conda-forge
tqdm 4.64.1 pyhd8ed1ab_0 conda-forge
traitlets 5.9.0 pyhd8ed1ab_0 conda-forge
typeguard 2.13.3 pyhd8ed1ab_0 conda-forge
typing-extensions 4.5.0 hd8ed1ab_0 conda-forge
typing_extensions 4.5.0 pyha770c72_0 conda-forge
tzdata 2023c h71feb2d_0 conda-forge
ucx 1.14.0 ha0ee010_0 conda-forge
unicodedata2 15.0.0 py310h5764c6d_0 conda-forge
urllib3 1.26.15 pyhd8ed1ab_0 conda-forge
userpath 1.7.0 pyhd8ed1ab_0 conda-forge
utfcpp 3.2.3 ha770c72_0 conda-forge
visions 0.7.5 pyhd8ed1ab_0 conda-forge
vtk 9.2.5 qt_py310hc895abb_200 conda-forge
wcwidth 0.2.6 pyhd8ed1ab_0 conda-forge
wheel 0.40.0 pyhd8ed1ab_0 conda-forge
widgetsnbextension 4.0.7 pyhd8ed1ab_0 conda-forge
wslink 1.10.1 pyhd8ed1ab_0 conda-forge
x264 1!164.3095 h166bdaf_2 conda-forge
x265 3.5 h924138e_3 conda-forge
xcb-util 0.4.0 h166bdaf_0 conda-forge
xcb-util-image 0.4.0 h166bdaf_0 conda-forge
xcb-util-keysyms 0.4.0 h166bdaf_0 conda-forge
xcb-util-renderutil 0.3.9 h166bdaf_0 conda-forge
xcb-util-wm 0.4.1 h166bdaf_0 conda-forge
xkeyboard-config 2.38 h0b41bf4_0 conda-forge
xorg-fixesproto 5.0 h7f98852_1002 conda-forge
xorg-kbproto 1.0.7 h7f98852_1002 conda-forge
xorg-libice 1.0.10 h7f98852_0 conda-forge
xorg-libsm 1.2.3 hd9c2040_1000 conda-forge
xorg-libx11 1.8.4 h0b41bf4_0 conda-forge
xorg-libxau 1.0.9 h7f98852_0 conda-forge
xorg-libxdmcp 1.1.3 h7f98852_0 conda-forge
xorg-libxext 1.3.4 h0b41bf4_2 conda-forge
xorg-libxfixes 5.0.3 h7f98852_1004 conda-forge
xorg-libxrender 0.9.10 h7f98852_1003 conda-forge
xorg-libxt 1.2.1 h7f98852_2 conda-forge
xorg-renderproto 0.11.1 h7f98852_1002 conda-forge
xorg-xextproto 7.3.0 h0b41bf4_1003 conda-forge
xorg-xproto 7.0.31 h7f98852_1007 conda-forge
xz 5.2.6 h166bdaf_0 conda-forge
yaml 0.2.5 h7f98852_2 conda-forge
yaml-cpp 0.7.0 h27087fc_2 conda-forge
yarl 1.8.2 py310h5764c6d_0 conda-forge
ydata-profiling 4.1.2 pyhd8ed1ab_0 conda-forge
zeromq 4.3.4 h9c3ff4c_1 conda-forge
zipp 3.15.0 pyhd8ed1ab_0 conda-forge
zlib 1.2.13 h166bdaf_4 conda-forge
zstandard 0.19.0 py310hdeb6495_1 conda-forge
zstd 1.5.2 h3eb15da_6 conda-forge
```
### OS
Ubuntu 22.04
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | 0easy
|
Title: [BUG] Memory growth when using PyGWalker with Streamlit
Body: **Describe the bug**
I observe RAM growth when using PyGWalker with the Streamlit framework. The RAM usage constantly grows on page reload (on every app run). When using Streamlit without PyGWalker, RAM usage remains constant (flat, does not grow). It seems like memory is never released; this was observed indirectly (we tracked growth locally, see the reproduction below, but we also observe the same issue in an Azure web app where RAM usage never declines).
**To Reproduce**
We tracked down the issue with an isolated Streamlit app with PyGWalker and a memory profiler (run with `python -m streamlit run app.py`):
```python
# app.py
import numpy as np
np.random.seed(seed=1)
import pandas as pd
from memory_profiler import profile
from pygwalker.api.streamlit import StreamlitRenderer
@profile
def app():
# Create random dataframe
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list("ABCD"))
render = StreamlitRenderer(df)
render.explorer()
app()
```
Observed output for a few consecutive reloads from the browser (press `R`, rerun):
```
Line # Mem usage Increment Occurrences Line Contents
13 302.6 MiB 23.3 MiB 1 render.explorer()
13 315.4 MiB 23.3 MiB 1 render.explorer()
13 325.8 MiB 23.3 MiB 1 render.explorer()
```
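As a possible mitigation (not a confirmed fix for the underlying leak), the renderer can be cached so it is not rebuilt on every rerun; `st.cache_resource` is a standard Streamlit API:
```python
import numpy as np
import pandas as pd
import streamlit as st
from pygwalker.api.streamlit import StreamlitRenderer

@st.cache_resource
def get_renderer() -> StreamlitRenderer:
    # build the dataframe and renderer once per process instead of once per rerun
    df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list("ABCD"))
    return StreamlitRenderer(df)

get_renderer().explorer()
```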
**Expected behavior**
RAM usage to remain at constant level between app reruns.
**Screenshots**
On the screenshot you can observe user activity peaks (causing CPU usage) and growing RAM usage (memory set).

This screenshot shows the memory profiling of the debug app.

**Versions**
streamlit 1.38.0
pygwalker 0.4.9.3
memory_profiler (latest)
python 3.9.10
browser: chrome 128.0.6613.138 (Official Build) (64-bit)
Tested locally on Windows 11
*Thanks for support!* | 0easy
|
Title: [Tracker] Move functions that are not tests from `tests` to `testing`
Body: We have a bunch of helper functions that live alongside the tests which should be moved to `testing`. The general rule is: if the code isn't a test or something related to pytest (a fixture, some config, etc.), it should be moved to the `testing` directory.
This issue will work as a tracker to map some of these functions, feel free to help us find others and/or help us move them to `testing`.
The goal here is to reorganize our test suite. If you have any questions, you can call us and/or ask on our Slack page too [](https://join.slack.com/t/kornia/shared_invite/zt-csobk21g-2AQRi~X9Uu6PLMuUZdvfjA)
_Originally posted by @johnnv1 in https://github.com/kornia/kornia/pull/2745#discussion_r1461253082_
--------
You can choose anyone below, or find new ones. You don't have to worry about separating all the tests in a file into a single PR
- [ ] Hard one, may need to reimplement part of it - https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/augmentation/test_augmentation.py#L64
- [ ] nerf - here are some of the functions I have found around the nerf tests; there may be more, it is worth re-checking :)
- https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/nerf/test_rays.py#L20
- https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/nerf/test_rays.py#L69
- https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/nerf/test_rays.py#L81
- https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/nerf/test_rays.py#L115
- https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/nerf/test_rays.py#L140
- https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/nerf/test_data_utils.py#L14
- https://github.com/kornia/kornia/blob/ce434e467faf617604bb3383cf78cd0b79f59dbd/tests/nerf/test_data_utils.py#L24
| 0easy
|
Title: Create gifs with folder structure after AutoML training
Body: Create gifs with folder structure after AutoML training and add them to the Readme | 0easy
|
Title: create a demo with vizzu
Body: vizzu is a package for creating animated charts in Jupyter notebooks.
Please create a demo app/notebook with vizzu in Mercury. | 0easy
|
Title: The “reStructuredText syntax error” check doesn’t trigger when the double backquote is not separated by non-word characters.
Body: ### Describe the issue
The “reStructuredText syntax error” check doesn’t trigger when the reStructuredText markup for the inline literal (` `` `) is not separated from the surrounding text by non-word characters.
### I already tried
- [x] I've read and searched [the documentation](https://docs.weblate.org/).
- [x] I've searched for similar filed issues in this repository.
### Steps to reproduce the behavior
1. Go to https://hosted.weblate.org/translate/weblate/documentation/zh_Hant/?checksum=94a74a30ad46aad6
### Expected behavior
The “reStructuredText syntax error” check should be triggered on the example below because a following space for the first ` ``foo / bar`` ` and a preceding space for the second ` ``foo / bar`` ` are required for normal rendering of the inline literal.
### Screenshots
https://hosted.weblate.org/translate/weblate/documentation/zh_Hant/?checksum=94a74a30ad46aad6

https://docs.weblate.org/zh-tw/weblate-5.10.2/admin/access.html#id4

### Exception traceback
```pytb
```
### How do you run Weblate?
weblate.org service
### Weblate versions
Weblate 5.10.3-dev
### Weblate deploy checks
```shell
```
### Additional context
_No response_ | 0easy
|
Title: DOC: Dataframe.from_records should not say that passing in a DataFrame for data is allowed
Body: ### Pandas version checks
- [X] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.from_records.html#pandas.DataFrame.from_records
### Documentation problem
The first text in the docstring says (emphasis at the end is mine)
> Convert structured or record ndarray to DataFrame.
>
> Creates a DataFrame object from a structured ndarray, sequence of
> tuples or dicts, or **DataFrame**.
However, starting in 2.1.0, passing in a DataFrame has been deprecated. In 2.1.0 it would raise a FutureWarning; in main it will raise a TypeError.
The documentation between 2.1.0 and main appear to have been updated to remove text in the Parameters section of the docstring that still said a DataFrame could be passed in for data, but the text in the initial section of the docstring was not.
### Suggested fix for documentation
Change the initial docstring text to be:
> Convert structured or record ndarray to DataFrame.
>
> Creates a DataFrame object from a structured ndarray or sequence of
> tuples or dicts. | 0easy
|
Title: What is the output of the Example Codes?
Body: This is not answered properly, and hence it is difficult to provide a fix for Windows 10. Not all of the example code will work on Windows 10.
If the expected behaviour of each example were documented, then people could try to provide a fix for it. | 0easy
|
Title: Jurik's DMX and CFB indicators
Body: **Which version are you running? The latest version is on GitHub. Pip is for major releases.**
0.3.14b0
**Describe the solution you'd like**
I would like have following indicators on pandas_ta repository:
* Jurik's DMX
* Jurik's CFB
DMX indicator is made for MT4 and the code for it can be found here (https://forex-station.com/viewtopic.php?p=1295447361#p1295447361)
CFB indicator is made for MT4 and the code for it can be found here (https://forex-station.com/viewtopic.php?p=1295448318&sid=6fbdfff0f5af01dc47bc41792c2aaede#p1295448318)
| 0easy
|
Title: Add option to add non-existing snapshots instead of raising an error
Body: **Is your feature request related to a problem? Please describe.**
The issue is that I have to run `--snapshot-update` in order to initiate a new assertion in a test. This is mostly an annoyance when testing with PyCharm using the IDE testing tools.
**Describe the solution you'd like**
I would prefer that there is an option that I can set by default that will not update any existing snapshots, but will add any newly created snapshots.
**Describe alternatives you've considered**
A flag called `create-non-existing-snapshots` as an option that can be set. This could then be configured in CLI or an IDE.
**Additional context**
None | 0easy
|
Title: Feature Request: Interact with new GH projects as for classic ones
Body: The way github3 lets you interact with classic projects is just great; unfortunately those will be discontinued this August, i.e. in a month.
It would be very useful to have a way to interact with new GH projects, which are now the standard, ideally with an API which is as similar as possible to the current one for classic projects.
I apologise in advance if this feature already exists and I missed it.
Thanks a lot for considering this upgrade of github3! | 0easy
|
Title: Hilbert Transform - Instantaneous Trendline
Body: Version: 0.3.14b0
Please add overlap: Hilbert Transform - Instantaneous Trendline
| 0easy
|
Title: docs: add docs for the dict() method
Body: # Context
we need to document the behavior of calling `dict()` on a BaseDoc. Especially with nested DocList.
```python
class InnerDoc(BaseDoc):
x: int
class MyDoc(BaseDoc):
l: DocList[InnerDoc]
doc = MyDoc(l=[InnerDoc(x=1), InnerDoc(x=2)])
print(f'{doc.dict()=}')
#> doc_view.dict()={'id': 'ab65fc52a60d1da1ce6196c100943e1f', 'l': [{'id': 'e218c3a6904cbbd0f48ccd10130c9f78', 'x': 1}, {'id': 'e30bf3d613bf5c3e805faf0c0dc0f704', 'x': 2}]}
```
and we need to explain how using `dict(doc)` is different from using `doc.dict()` | 0easy
|
Title: Docs - Clean up troubleshooting, make links to how it all works
Body: Note to self
| 0easy
|
Title: Windows Install error - ValueError: path 'resources/' cannot end with '/
Body: https://github.com/capitalone/DataProfiler/blob/5b04b7fe5ee3556235c397efb69b32cd5d364a3b/setup.py#L33
Ran into an install isue
ValueError: path 'resources/' cannot end with '/
As per
https://stackoverflow.com/questions/20356482/valueerror-path-conf-cannot-end-with
resource_dir = "resources/"
needs to change to
resource_dir = "resources"
Thank you. | 0easy
|
Title: Extensive benchmarking of reasoning models including variance
Body: In their [R1 repo](https://github.com/deepseek-ai/DeepSeek-R1) the DeepSeek people recommend estimating pass@1 by asking the same question multiple times.
We implemented that into our [Reasoning benchmark](https://github.com/sgl-project/sglang/tree/main/benchmark/reasoning_benchmark).
In addition to the averaged accuracy, we also report the average standard error as a measure of uncertainty.
Ideally that would include some plots.
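As an illustration of these two numbers (the exact definitions live in the linked benchmark code), the averaged accuracy and its standard error over repeated trials can be computed as:
```python
import numpy as np

def pass_at_1_with_uncertainty(correct: np.ndarray) -> tuple[float, float]:
    """correct: array of shape (n_trials, n_questions) with 0/1 outcomes."""
    per_trial_accuracy = correct.mean(axis=1)
    mean_accuracy = per_trial_accuracy.mean()
    std_error = per_trial_accuracy.std(ddof=1) / np.sqrt(len(per_trial_accuracy))
    return mean_accuracy, std_error

rng = np.random.default_rng(0)
outcomes = rng.integers(0, 2, size=(8, 30))  # e.g. 8 trials over 30 AIME-style questions
print(pass_at_1_with_uncertainty(outcomes))
```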
Now we want to perform experiments on how the results change under
* repeated experiment with same hyperparameters
* increased number of trials
I think AIME 2024 is suited to this experiment because LIMO is quite large and it would take a long time to run experiments with a large number of trials.
Please see the recently [merged](https://github.com/sgl-project/sglang/pull/3677#issuecomment-2669506253) branch that includes measurement of uncertainty in reasoning model answers for more details and a detailed explanation of the metrics.
Feel free to reach out to me if you have further questions.
| 0easy
|
Title: Example of inference with open_clip
Body: Would be nice to provide an example of loading a model and performing inference on a single example. | 0easy
|
Title: Document that `Get Variable Value` and `Variable Should (Not) Exist` do not support named-argument syntax
Body: Hi,
I noticed that I am not able to pass the `default` argument to `Get Variable Value` from `BuiltIn` as a keyword argument, i.e. the following does not provide the expected result: `${result}    BuiltIn.Get Variable Value    ${MY_VARIABLE}    default=My Default`
One would expect `${result}` to contain `My Default` if `${MY_VARIABLE}` does not exist. Instead `default=My Default` is returned.

I didn't find any mention of this behaviour anywhere, and if the keyword works as intended, I would expect this to be mentioned in the documentation.
This was tested with version 7.0. | 0easy
|
Title: [BUG] Video generation fails when creating subtitles locally.
Body: **Describe the bug**
Video generation fails after MoviePy finishes. The next debug message is "Creating Subtitles Locally" and then comes an error: "Expected contiguous start of match or end of input at char 198, but started at char 483 (unmatched content: '2\n0:00:10,780 --> 0:00\nIn this video, we will explore the fascinating world of cars, from their history and evolution to their impact on society and the environment.\n\n1\n\n3\n0:00 --> 0:00:25,760\nEvolution of Cars:\n\nCars have come a long way since their invention in the late 19th century')"
The request returns an error and no output.mp4 is generated
**To Reproduce**
Steps to reproduce the behavior:
1. Cloned git repository
2. Built ImageMagick from Linux source and installed on Ubuntu 22.04
3. Updated .env file per ENV.md. ImageMagick is pointed to /usr/bin/local/magick where it was installed by make install
4. Run main.py for the backend as suggested
5. Send a JSON request to the backend endpoint from Postman on a remote machine:
{
"videoSubject": "Cars",
"voice": "en_au_001"
}
**Expected behavior**
Video to be generated at ./temp/output.mp4
**Observed behavior**
The video clips are downloaded and audio is generated into ./temp/, but the request errors and stops before the video is composited.
| 0easy
|
Title: Add new step to `piccolo asgi new` for media storage
Body: When running the `piccolo asgi new` command to create a new app, it asks you some questions:
<img width="535" alt="Screenshot 2022-08-27 at 15 22 14" src="https://user-images.githubusercontent.com/350976/187034384-776b9be0-c4a4-490a-b464-c2bd73ae3be0.png">
It would be good if there was one more question - 'Use S3 for media storage?'. It's a fairly simple modification to [this file](https://github.com/piccolo-orm/piccolo/blob/master/piccolo/apps/asgi/commands/new.py).
If they answer 'y' to 'Use S3?' we will change the following line to `piccolo_admin[s3]`:
https://github.com/piccolo-orm/piccolo/blob/305819612bb935cef25804667b223478f6ad71b7/piccolo/apps/asgi/commands/templates/app/requirements.txt.jinja#L5
In the future we may add a ``media.py`` to the template showing how S3 is configured, but that can come in the future.
| 0easy
|
Title: Better default height for metrics property
Body: ### Description
The height of a metric is taking more than half the available height of my page. This will never be a correct value. (I would also argue the same for the width) A new default height should be defined.
```python
from taipy.gui import Gui
import taipy.gui.builder as tgb
value = 50
with tgb.Page() as page:
tgb.metric("{value}", type="none")
Gui(page=page).run(title="Frontend Demo")
```

### Acceptance Criteria
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | 0easy
|
Title: Poor docstring formatting in vscode
Body: Here's what it looks like for getting the docstring within `@callback`. Lots of white space and odd newline breaks. Could be nicer!
<img width="730" alt="image" src="https://github.com/plotly/dash/assets/1280389/288da817-8db5-40a2-9be3-36b00c867e32">
| 0easy
|
Title: Isn't the frontend code open-sourced? I'd like to study it
Body: | 0easy
|
Title: `LOGFIRE_SEND_TO_LOGFIRE` env var doesn't accept `if-token-present` as a value
Body: ### Description
It would be very useful if it did; it would also be more consistent with the `configure()` kwargs.
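For context, a minimal sketch of what already works in code versus what the env var is expected to accept (assuming the standard `logfire.configure` signature):
```python
import logfire

# works today: the kwarg accepts the special value
logfire.configure(send_to_logfire="if-token-present")

# desired: the same behaviour driven purely by the environment, e.g.
#   LOGFIRE_SEND_TO_LOGFIRE=if-token-present
# which is currently rejected as an invalid boolean value.
```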
### Python, Logfire & OS Versions, related packages (not required)
```TOML
logfire="0.53.0"
platform="macOS-14.6.1-arm64-arm-64bit"
python="3.12.2 (main, Feb 25 2024, 03:55:42) [Clang 17.0.6 ]"
[related_packages]
requests="2.32.3"
pydantic="2.9.2"
fastapi="0.115.0"
openai="1.47.1"
protobuf="4.25.5"
rich="13.8.1"
executing="2.1.0"
opentelemetry-api="1.27.0"
opentelemetry-exporter-otlp-proto-common="1.27.0"
opentelemetry-exporter-otlp-proto-http="1.27.0"
opentelemetry-instrumentation="0.48b0"
opentelemetry-instrumentation-asgi="0.48b0"
opentelemetry-instrumentation-asyncpg="0.48b0"
opentelemetry-instrumentation-fastapi="0.48b0"
opentelemetry-instrumentation-httpx="0.48b0"
opentelemetry-instrumentation-system-metrics="0.48b0"
opentelemetry-proto="1.27.0"
opentelemetry-sdk="1.27.0"
opentelemetry-semantic-conventions="0.48b0"
opentelemetry-util-http="0.48b0"
```
| 0easy
|
Title: Inability for automatic token refresh when using `grant_type=password`
Body: **Describe the bug**
I'm trying to authenticate and make requests using an OAuth2Session with `username/password` authentication. I'm able to generate an initial token for my session, and I wanted to use the automatic token refresh functionality by passing in the `token_endpoint`, as stated in the documentation.
When my token expired, it was not refreshed automatically, which led to an `InvalidToken` error. After looking into this further, I tried to make the refresh flow generate a new token by passing in `grant_type=password` as one of the recognized metadata values that get passed into the client. However, I noticed that this does not properly resolve the problem, because `ensure_active_token`, which is used to automatically refresh tokens:
https://github.com/lepture/authlib/blob/a2ada05dae625695f559140675a8d2aebc6b5974/authlib/oauth2/client.py#L256-L269
is neglecting to pass in the rest of the metadata from the session (like `grant_type`) to `refresh_token` here https://github.com/lepture/authlib/blob/a2ada05dae625695f559140675a8d2aebc6b5974/authlib/oauth2/client.py#L222-L254
that only uses passed-in params to construct the body to generate a new token through `prepare_token_request` here:
https://github.com/lepture/authlib/blob/a2ada05dae625695f559140675a8d2aebc6b5974/authlib/oauth2/client.py#L238-L241
**Error Stacks**
```
File
"/Users/mamdouh.abuelatta/.local/share/virtualenvs/optic-dr-ETdtnlpf/lib/python3.8/site-packages/authlib/integrations/requests_client/oauth2_session.py
", line 109, in request
return super(OAuth2Session, self).request(
File
"/Users/mamdouh.abuelatta/.local/share/virtualenvs/optic-dr-ETdtnlpf/lib/python3.8/site-packages/requests/sessions.py
", line 528, in request
prep = self.prepare_request(req)
File
"/Users/mamdouh.abuelatta/.local/share/virtualenvs/optic-dr-ETdtnlpf/lib/python3.8/site-packages/requests/sessions.py
", line 456, in prepare_request
p.prepare(
File
"/Users/mamdouh.abuelatta/.local/share/virtualenvs/optic-dr-ETdtnlpf/lib/python3.8/site-packages/requests/models.py
", line 320, in prepare
self.prepare_auth(auth, url)
File
"/Users/mamdouh.abuelatta/.local/share/virtualenvs/optic-dr-ETdtnlpf/lib/python3.8/site-packages/requests/models.py
", line 556, in prepare_auth
r = auth(self)
File
"/Users/mamdouh.abuelatta/.local/share/virtualenvs/optic-dr-ETdtnlpf/lib/python3.8/site-packages/authlib/integrations/requests_client/oauth2_session.py
", line 24, in __call__
self.ensure_active_token()
File
"/Users/mamdouh.abuelatta/.local/share/virtualenvs/optic-dr-ETdtnlpf/lib/python3.8/site-packages/authlib/integrations/requests_client/oauth2_session.py
", line 21, in ensure_active_token
raise InvalidTokenError()
authlib.integrations.base_client.errors.InvalidTokenError: token_invalid:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
```
**To Reproduce**
A minimal example to reproduce the behavior:
1. Initialize OAuth2Session with a valid `token_endpoint` and `grant_type=password`
2. `fetch_token` with `username/password`
3. Wait for(or manipulate token expiry) token to be in an expired state
4. Use `client.request` to initiate a request
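A rough sketch of steps 1–2 (the endpoint URL and credentials are placeholders):
```python
from authlib.integrations.requests_client import OAuth2Session

# placeholder endpoint and credentials
client = OAuth2Session(
    client_id="my-client",
    client_secret="my-secret",
    token_endpoint="https://auth.example.com/oauth/token",
    grant_type="password",  # stored as session metadata
)
client.fetch_token(
    "https://auth.example.com/oauth/token",
    username="user@example.com",
    password="hunter2",
)
# ... wait for (or manipulate) the token expiry, then:
client.get("https://api.example.com/resource")  # raises InvalidTokenError
```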
**Expected behavior**
Ability to pass `grant_type=password` and have the token refreshed automatically when the initial token has expired.
**Environment:**
- OS: macOS Big Sur 11.7.3
- Python Version: 3.8
- Authlib Version: 1.2.0
**Proposed Solution**
**A.** Pass in the rest of the metadata from the session (like `grant_type`) to `refresh_token` here:https://github.com/lepture/authlib/blob/a2ada05dae625695f559140675a8d2aebc6b5974/authlib/oauth2/client.py#L222-L254
that will be passed-down as a params to construct the body to generate a new token through `prepare_token_request` here:
https://github.com/lepture/authlib/blob/a2ada05dae625695f559140675a8d2aebc6b5974/authlib/oauth2/client.py#L238-L241
**B.** Pass in combined params (method/session) when constructing the body to generate a new token through `prepare_token_request` here:
https://github.com/lepture/authlib/blob/a2ada05dae625695f559140675a8d2aebc6b5974/authlib/oauth2/client.py#L238-L241
| 0easy
|
Title: [new] `get_latest_partition_value(fully_qualified_table)`
Body: return the date (or integer) of the latest partition. | 0easy
|
Title: Marketplace - "Select All" doesn't work in Agent Dashboard
Body: [20241219-1429-35.8537904.mp4](https://uploads.linear.app/a47946b5-12cd-4b3d-8822-df04c855879f/a9af1ad6-fa5d-42db-bb10-c77d71db8fce/d46d31aa-0aaa-41da-8c12-8360990fe6c1?signature=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJwYXRoIjoiL2E0Nzk0NmI1LTEyY2QtNGIzZC04ODIyLWRmMDRjODU1ODc5Zi9hOWFmMWFkNi1mYTVkLTQyZGItYmIxMC1jNzdkNzFkYjhmY2UvZDQ2ZDMxYWEtMGFhYS00MWRhLThjMTItODM2MDk5MGZlNmMxIiwiaWF0IjoxNzM0NjE4NjQ3LCJleHAiOjMzMzA1MTc4NjQ3fQ.m8f_3RMzbjf0mzdyXylF87pjKyAa5dRw1acNgGZ0_5k) | 0easy
|
Title: Indent debug output
Body: #### The problem
The `factory.debug` context manager is fantastic, but it's still hard to tell which declaration caused each line of output.
#### Proposed solution
I think the simplest thing that would help is illustrated by the documentation for `factory.debug`, which adds "artificial indentation" to the output. It looks to me like `len(step.chain())` should compute a reasonable nesting depth, and the various calls to `logger.debug` could add whitespace proportional to the nesting depth, or something.
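A minimal sketch of the whitespace idea (assuming the nesting depth is available as `len(step.chain())` at the call sites):
```python
import logging

logger = logging.getLogger("factory.generate")

def debug_at_depth(depth: int, message: str) -> None:
    """Sketch: indent each debug line by the current builder depth."""
    logger.debug("%s%s", "    " * depth, message)

# inside factory_boy this could be called as, e.g.:
#   debug_at_depth(len(step.chain()), "Evaluating declaration ...")
```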
I think it would be even nicer if the log messages could report the class name and field name which is currently being constructed. That looks complicated though, because I don't see that the declarations' `evaluate` methods have any way to get that information currently. | 0easy
|
Title: Ability to set centre and zoom options on basic map
Body: It would be good to be able to set the centre and zoom on maps (cf issue #80). An interface like:
``` python
m = Map(zoom=8, centre=[-5.0, 5.0], centre_on_data=False)
```
would work. The `centre_on_data` argument is useful to stop the default behaviour, which re-centres the map so that it always bounds the data.
| 0easy
|
Title: Gallery examples use Matplotlib in a hybrid way (explicit vs. implicit interfaces).
Body: ### Description:
I would suggest using the "object-oriented (OO) style" (as opposed to pyplot-style) across the gallery.
This issue replicates https://github.com/datacarpentry/image-processing/issues/319, which I opened for the skimage-based lesson by Data Carpentry. | 0easy
|
Title: register_streamfield_block not documented?
Body: Hi guys - new to wagtail-grapple (great product, btw).
I am seeing references to register_streamfield_block in several places in the example, but no definition of exactly what it is, or when it should be used. Seems like it should be documented on the Decorators page.
IF I have nested StructBlocks inside StreamFields, and the top-level is just a StreamField which contains those blocks, do I need to use the decorator on the nested blocks?
| 0easy
|
Title: Migrate setup.py from distutils to setuptools
Body: distutils is going away in a future version of Python
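The change is essentially a one-line swap in `setup.py` (sketch; the actual keyword arguments depend on the project):
```python
# before: from distutils.core import setup
from setuptools import setup

setup(
    name="example-project",   # placeholder metadata
    version="0.1.0",
    packages=["example_project"],
)
```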
https://docs.python.org/3/whatsnew/3.10.html#distutils-deprecated | 0easy
|
Title: Compute predictions on single sample
Body: Right now there is an error when computing prediction on a single sample, see #156 for details. | 0easy
|
Title: Refactor `alert_view.py`
Body: ## Tell us about the problem you're trying to solve
In the [`alert_view.py`](https://github.com/chaos-genius/chaos_genius/blob/04cfe2ee0e82ff627fa29f8f50ccc08b6ba52bef/chaos_genius/views/alert_view.py) file, database calls and other functionality are directly included in the route functions.
## Describe the solution you'd like
Ideally, these should be present as functions in the [alerts controller](https://github.com/chaos-genius/chaos_genius/blob/main/chaos_genius/controllers/alert_controller.py) which are simply called in the route functions.
For example, this: https://github.com/chaos-genius/chaos_genius/blob/04cfe2ee0e82ff627fa29f8f50ccc08b6ba52bef/chaos_genius/views/alert_view.py#L140-L148
should be separated into a `def enable_alert(alert_id: int)` function.
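For illustration, a rough sketch of what that controller function could look like (the import path and the `Alert` query/save helpers are assumptions about the chaos_genius model layer):
```python
# sketch only; the import path and model helpers are assumptions
from chaos_genius.databases.models.alert_model import Alert

def enable_alert(alert_id: int) -> Alert:
    """Set the alert's active flag and persist it."""
    alert = Alert.query.get(alert_id)
    alert.active = True
    alert.save(commit=True)
    return alert
```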
A good rule of thumb to check if it's abstracted enough is that the view file should not import the DB model directly (in this case, it is the `Alert` model).
## Describe alternatives you've considered
N/A
## Additional context
N/A
---
Please leave a reply or reach out to us on our [community slack](https://github.com/chaos-genius/chaos_genius#octocat-community) if you need any help. | 0easy
|
Title: gevent's subprocess stdout as pipe processing blocks the event loop
Body: * gevent version: 1.3.7
* Python version: 2.7 latest on ubuntu 16.04 and 3.7.1 from homebrew on mac
* Operating System: Written above on x64
### Description:
I am trying to log the stdout / stderr of a subprocess via a greenlet loop (one for each), when there is heavy writing i/o on the child process, gevent stops processing other greenlets on the event loop, and thus they are starved until the i/o pressure stops.
I expected that the other greenlets won't starve...
### What I've run:
```python
from gevent import sleep, spawn
import gevent.subprocess
def line(out):
for line in iter(out.readline, b''):
pass
proc = gevent.subprocess.Popen('yes hi', shell=True, stdout=gevent.subprocess.PIPE)
spawn(line, proc.stdout)
sleep(1)
print('I never get here')
proc.kill()
```
| 0easy
|
Title: `coerce_numbers_to_str` can cause unicode decode errors
Body: ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
Hi! I just upgraded pydantic version and was testing passing a string like `'hi there!\ud835'` which contains an unpaired unicode character to a model with `coerce_numbers_to_str=True` and saw a unicode error. When I changed it to `False` it went away.
I'm not sure whether this should/shouldn't throw an error, but I think the behavior should be more consistent.
### Example Code
```Python
from pydantic import BaseModel, ConfigDict
class ModelWithCoercion(BaseModel):
x: str
model_config = ConfigDict(coerce_numbers_to_str=True)
class ModelWithoutCoercion(BaseModel):
x: str
# Ok
ModelWithoutCoercion(x='hi there!\ud835')
# Also Ok
ModelWithoutCoercion(x=b'hi there!\ud835')
# Also Ok
ModelWithCoercion(x=b'hi there!\ud835')
# Error
ModelWithCoercion(x='hi there!\ud835')
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: /Users/aagam.dalal/Library/Caches/pypoetry/virtualenvs/egp-api-backend-YxsoG3qx-py3.11/lib/python3.11/site-packages/pydantic
python version: 3.11.5 (main, Sep 11 2023, 08:31:25) [Clang 14.0.6 ]
platform: macOS-14.3.1-arm64-arm-64bit
related packages: fastapi-0.115.2 mypy-1.12.0 typing_extensions-4.12.2
commit: unknown
```
| 0easy
|
Title: Wrong power base in LDA Model log_perplexity documentation
Body: #### Problem description
Gensim LDAModel documentation incorrect
#### Steps/code/corpus to reproduce
Based on the code in log_perplexity, it looks like it should be e^(-bound) since all of the functions used in computing it seem to be using the natural logarithm/e
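For reference, a minimal runnable sketch of the computation being compared against (the toy corpus is just for illustration):
```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["car", "engine", "wheel"], ["road", "car", "driver"], ["engine", "fuel"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2)

bound = lda.log_perplexity(corpus)   # per-word likelihood bound
perplexity = np.exp(-bound)          # e**(-bound), as argued above; the docstring describes 2**(-bound)
print(perplexity)
```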
| 0easy
|
Title: Delete button remains on screen after leaving element
Body: Happens on `/chat` and in the subchats if the chat is not active

The code for this is here:
https://github.com/LAION-AI/Open-Assistant/blob/9ad4062ff8e254ad554f2017810039b06c5a5893/website/src/components/Chat/ChatListItem.tsx#L155-L172 | 0easy
|
Title: typing_extensions is missing from dependencies
Body: https://github.com/wemake-services/django-test-migrations/blob/master/pyproject.toml#L55
Related: https://github.com/wemake-services/django-test-migrations/issues/82#issuecomment-632059046
Thanks to @lee-vg for the report | 0easy
|
Title: [community] PyTorch/XLA support
Body: There are pipelines with missing XLA support. We'd like to improve coverage with your help!
[Example 1](https://github.com/huggingface/diffusers/pull/10222/files)
[Example 2](https://github.com/huggingface/diffusers/pull/10109/files)
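The pattern those example PRs add is roughly the following (sketch; the availability helper comes from `diffusers.utils` and the call is inserted in the pipeline's denoising loop):
```python
from diffusers.utils import is_torch_xla_available

if is_torch_xla_available():
    import torch_xla.core.xla_model as xm

    XLA_AVAILABLE = True
else:
    XLA_AVAILABLE = False

# ... inside the pipeline's denoising loop, after each scheduler step:
#     if XLA_AVAILABLE:
#         xm.mark_step()
```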
Please limit changes to a single pipeline in each PR. Changes must be only related to XLA support. Feel free to ping me for review.
| 0easy
|
Title: Captured subproc: AttributeError: 'NoneType' object has no attribute 'pid'
Body: xonsh 0.19.2
In case of running unknown command:
```xsh
$XONSH_SHOW_TRACEBACK = True
!(qwe qwe)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/main.py", line 273, in _pprint_displayhook
printed_val = pretty(value)
^^^^^^^^^^^^^
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/lib/pretty.py", line 129, in pretty
printer.pretty(obj)
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/lib/pretty.py", line 399, in pretty
return _default_pprint(obj, self, cycle)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/lib/pretty.py", line 518, in _default_pprint
_repr_pprint(obj, p, cycle)
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/lib/pretty.py", line 727, in _repr_pprint
output = repr(obj)
^^^^^^^^^
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/procs/pipelines.py", line 199, in __repr__
s += ",\n ".join(
^^^^^^^^^^^^^
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/procs/pipelines.py", line 202, in <genexpr>
if debug or getattr(self, a) is not None
^^^^^^^^^^^^^^^^
File "/Users/pc/.local/xonsh-env/lib/python3.12/site-packages/xonsh/procs/pipelines.py", line 730, in pid
return self.proc.pid
^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'pid'
```
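A minimal sketch of a defensive fix in `pipelines.py` (guarding the property itself; attribute names follow the traceback above):
```python
# inside xonsh/procs/pipelines.py, on the CommandPipeline class (sketch)
@property
def pid(self):
    """Process identifier, or None if the process never started."""
    return self.proc.pid if self.proc is not None else None
```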
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: mypy strict mode doesn't play well with less-specific slack_sdk.model imports
Body: Kinda terrible title, dunno how to be super succinct with it. Apologies.
In PyCharm, given the file:
```
class MyBlock(InputBlock):
pass
```
Letting it auto-generate the import (OSX keyboard combo `Option + Enter` when over the invalid token `InputBlock`) will turn the file into:
```
from slack_sdk.models.blocks import InputBlock
class MyBlock(InputBlock):
pass
```
If I then save that file as `testing.py` and run `mypy --strict --show-error-codes testing.py` I'll get:
```
testing.py:1: error: Module "slack_sdk.models.blocks" does not explicitly export attribute "InputBlock"; implicit reexport disabled [attr-defined]
```
because the `__init__` for the `blocks` module doesn't include an `__all__` entry to re-export them as explicitly public.
Ideally, the solution is to provide an `__all__` here, so that anyone using mypy in strict mode doesn't have to "work around" the level of strictness for their project by doing something like the following in their `mypy.ini` (though naturally that solution does exist):
```
[mypy-slack_sdk.models.blocks]
implicit_reexport = True
```
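For illustration, the `__all__` fix itself would look roughly like this in `slack_sdk/models/blocks/__init__.py` (abbreviated sketch; the real list would cover every public class):
```python
# slack_sdk/models/blocks/__init__.py (sketch, abbreviated)
from .blocks import Block, SectionBlock, InputBlock
from .block_elements import BlockElement, ButtonElement

__all__ = [
    "Block",
    "SectionBlock",
    "InputBlock",
    "BlockElement",
    "ButtonElement",
]
```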
### Category
- [x] **slack_sdk.models** (UI component builders)
| 0easy
|
Title: Add ExtraTreesRegressor
Body: | 0easy
|
Title: Use official MXNet batchify to implement the batchify functions
Body: ## Description
We should use the official mxnet batchify functions to implement our own batchify functions. However, since we'd like to later support other frameworks, we should still keep our own `batchify.py`. We can change it to call MXNet implementations.
MXNet batchify: https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/data/batchify.py
GluonNLP batchify: https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | 0easy
|
Title: Support pathlib.Path for directory in `App.add_static_route(...)`
Body: It has become trendy to support [`pathlib.Path`](https://docs.python.org/3/library/pathlib.html) objects in addition to, or even instead of, `str` where the argument in question denotes a filesystem path.
We could support and document _directory_ of type [`Path`](https://docs.python.org/3/library/pathlib.html) in [`App.add_static_route(...)`](https://falcon.readthedocs.io/en/stable/api/app.html#falcon.App.add_static_route).
It currently already works implicitly via our invocation of `os.path.normpath` in Python 3.6+, but explodes on 3.5.
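A minimal sketch of a 3.5-friendly coercion inside `add_static_route` (the helper name is illustrative):
```python
import pathlib

def _normalize_directory(directory):
    """Coerce a pathlib.Path (or str) to str so os.path.normpath works on 3.5 too."""
    if isinstance(directory, pathlib.PurePath):
        return str(directory)
    return directory
```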
We could either fix this for 3.5, or wait for Falcon 4.0 which will presumably drop 3.5 support, and then just document the implicit support, backed by unit tests. | 0easy
|
Title: Create edit profile page on authors module
Body: | 0easy
|
Title: Topic 4. Part 4: Some modules are imported at the beginning, but are not used further in the text
Body: 
It seems that _TfidfTransformer, TfidfVectorizer, LinearSVC_ were left in by mistake when preparing the final [article](https://mlcourse.ai/notebooks/blob/master/jupyter_english/topic04_linear_models/topic4_linear_models_part4_good_bad_logit_movie_reviews_XOR.ipynb). If they are kept on purpose, it would be worth adding a few words about them in the text.
| 0easy
|
Title: Dependencies installation command for Ubuntu/Linux
Body: There is a wrong name for the Boost apt-get package in the [installation guide for Linux Developers](https://github.com/giotto-ai/giotto-tda#linux):
~`sudo apt-get install cmake boost`~
`sudo apt-get install cmake libboost-dev`
| 0easy
|
Title: Restructure the code for some examples according to PEP8 guidelines?
Body: Hey there! A really neat repository!
I was browsing through the code and noticed that in some examples you've written entire blocks of code within the `if __name__ == "__main__":` block. I am willing to restructure it; it would then be possible to import the functions you've put inside those `if __name__ == "__main__":` blocks if they were moved outside, as shown below.
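Concretely, the restructuring I have in mind is just the usual pattern (sketch):
```python
def main():
    # previously the body of the `if __name__ == "__main__":` block,
    # now importable from other modules and tests
    ...

if __name__ == "__main__":
    main()
```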
Let me know what you think 😁 | 0easy
|
Title: Number of Downloads metric API
Body: The canonical definition is here: https://chaoss.community/?p=4466 | 0easy
|
Title: Add "me" query
Body: It would be useful to have a "me" query where the client gets the details of the logged-in user (sending a JWT). The client doesn't have to store the ID in that case.
This could be done with something like this I think:
```python
def resolve_me(self, info):
    user = info.context.user
    if user.is_anonymous:
        raise GraphQLError('Not logged in!')
    return user
```
What do you think?
| 0easy
|
Title: `dummy_minimize` with ask and tell?
Body: Just realized this, does the current API allow one to use random search with the ask and tell API? (in a user-friendly way) | 0easy
|
Title: Export merged environment to yaml
Body: I'm looking for a way to "flatten" the settings into a single `values.yaml` file from the following `settings.yaml`.
Assume that `ENV_FOR_DYNACONF=development`
```
global:
a: 1
default:
b: 2
development:
foo:
bar:
c: 3
production:
foo:
bar:
c: 4
```
Desired `values.yaml`:
```
a: 1
b: 2
foo:
bar:
c: 3
```
Using `print(yaml.dump(settings.as_dict()))` works for the top-level keys, but the nested keys are returned as *DynaBox* types.
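A rough workaround I've considered: recursively converting the DynaBox values before dumping (assuming DynaBox exposes `to_dict()`, as the underlying Box type does):
```python
import yaml
from dynaconf import settings

def to_plain(obj):
    """Recursively convert DynaBox/BoxList values into plain dicts and lists."""
    if hasattr(obj, "to_dict"):
        obj = obj.to_dict()
    if isinstance(obj, dict):
        return {key: to_plain(value) for key, value in obj.items()}
    if isinstance(obj, list):
        return [to_plain(value) for value in obj]
    return obj

with open("values.yaml", "w") as fh:
    yaml.safe_dump(to_plain(settings.as_dict()), fh, default_flow_style=False)
```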
| 0easy
|
Title: [FEA] Docker security scan
Body: **Is your feature request related to a problem? Please describe.**
PyGraphistry should pass security scans for sensitive environments and ideally with an audit trail
**Describe the solution you'd like**
- Use a simple free scanner like `docker scan`
- Run as part of CI
- Run as part of release, including scan output
TBD: should releases fail if a high-severity issue is found? And if so, provide a way to override.
**Additional context**
Add any other context or screenshots about the feature request here.
| 0easy
|
Title: [DOCS] Consistency in function and method docstrings
Body: ### Issue Description
Function and method parameters are documented using either `Parameters`, `Arguments`, or `Args` keywords. We should only use `Arguments` everywhere.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | 0easy
|
Title: Improve error message when dotted path fails to load
Body: The `load_dotted_path` raises the following error if unable to load the module:
```pytb
Traceback (most recent call last):
File "/Users/Edu/Desktop/import-error/script.py", line 4, in <module>
load_dotted_path('tests.quality.fn')
File "/Users/Edu/dev/ploomber/src/ploomber/util/dotted_path.py", line 128, in load_dotted_path
module = importlib.import_module(mod)
File "/Users/Edu/miniconda3/envs/ploomber/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
ModuleNotFoundError: An error happened when trying to import dotted path "tests.quality.fn": No module named 'tests.quality'
```
I ran into this issue when trying to import from a local `tests/quality.py` script because IPython registers the `tests` module:
```python
import tests
tests
<module 'tests' from '/Users/Edu/miniconda3/envs/ploomber/lib/python3.9/site-packages/IPython/extensions/tests/__init__.py'>
```
Hence, it shadows my module. It'd be better if we provided extra information to the user. For example, if loading `tests.quality.fn` fails, we could check if we can locate `tests` and include the path to it as part of the error message, so the error becomes something like this:
```pytb
ModuleNotFoundError: An error happened when trying to import dotted path "tests.quality.fn": No module named 'tests.quality' (loaded 'tests' module from /patht/to/tests/__init__.py )
```
To hint to the user that the problem is that Python is loading the incorrect module, we can use something like this:
```python
# if tests.quality.fn fails, try importing just tests, if it works, show the origin as part of the error message
importlib.util.find_spec('tests').origin
```
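A sketch of how the extra hint could be assembled (the helper name is hypothetical):
```python
import importlib.util

def _origin_hint(dotted_path: str) -> str:
    """Return ' (loaded <top> module from <path>)' if the top-level module resolves."""
    top_level = dotted_path.split(".")[0]
    try:
        spec = importlib.util.find_spec(top_level)
    except (ImportError, ValueError):
        return ""
    if spec is not None and spec.origin:
        return " (loaded {!r} module from {})".format(top_level, spec.origin)
    return ""
```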
Definition is here:
https://github.com/ploomber/ploomber/blob/fd9b4c7a2e787c0206f841928d1be90ac142c7a8/src/ploomber/util/dotted_path.py#L107
Tasks:
- [ ] fix load_dotted_path
- [ ] add tests to https://github.com/ploomber/ploomber/blob/master/tests/util/test_dotted_path.py | 0easy
|
Title: PyCharm does not detect property from DocumentArray
Body: ```python
from docarray import BaseDocument, DocumentArray
from docarray.typing import TorchTensor
import torch
class Image(BaseDocument):
tensor: TorchTensor[3, 224, 224]
batch = DocumentArray[Image](
[Image(tensor=torch.zeros(3, 224, 224)) for _ in range(10)]
)
batch.tensor  # here PyCharm will complain
```
With Document (and pydantic) we don't have this problem

We need to find a way (like in pandas) so that PyCharm and other tools do not complain. | 0easy
|
Title: Good First Issue [Deep Learning]: Add Support for Different Activation Functions in Multi-Layer Neural Networks
Body: ### Issue Description:
Currently, the `create_and_compile` (ForecasterRNN) function only supports a single activation function for all layers in a multi-layer neural network. To increase flexibility, it would be beneficial to allow users to specify different activation functions for each layer when multiple layers are present. This would enable more fine-grained control over the behavior of the model.
### Feature Request:
Enhance the function to support different activation functions for each layer in the network. This should apply both to the recurrent and dense layers, allowing users to pass a list of activation functions corresponding to each layer.
### Acceptance Criteria:
- [ ] Modify the function to accept a list of activation functions for both recurrent and dense layers.
- If a single activation function is provided, it should be applied to all layers (as in the current behavior).
- If a list is provided, each layer should use the corresponding activation function from the list.
- [ ] Validate that the number of activation functions matches the number of layers in the network.
- [ ] Update the documentation and docstrings to explain the new functionality.
- [ ] Add unit tests to verify the behavior for both single and multiple activation functions.
### Example:
```python
# Example with different activation functions for each layer
model = create_and_compile(
lags=10,
n_series=5,
recurrent_units=[32, 64],
dense_units=[64, 32],
optimizer="adam",
activation=["relu", "tanh"], # List of activation functions
loss="mse",
compile_kwargs={}
)
```
In this example, the first recurrent layer would use `"relu"` and the second recurrent layer would use `"tanh"`. Similarly, for dense layers, different activation functions can be applied as specified by the list.
### Notes:
- Ensure backward compatibility: If a single activation function is passed, it should be applied to all layers (recurrent or dense), preserving the current behavior.
- If a list of activation functions is passed, ensure that it matches the number of layers for proper assignment.
- Provide clear error messages if the list of activation functions does not match the number of layers.
This feature will allow users to customize the behavior of each layer more precisely, opening up possibilities for more advanced model designs. | 0easy
|
Title: Feature: Add map view for visualizing connections
Body: I've seen this requested a couple times in forums, so if someone wants to implement it, it could probably be done with https://plotly.com/python/bubble-maps/ in a new tab with https://dash.plotly.com/dash-core-components/tabs
All the code for the dash is under the function [ui_dash](https://github.com/elesiuta/picosnitch/blob/41af07d770925731d71642ea1ece17b4ca9d39f7/picosnitch.py#L1751) and the connection data is queried from the [sqlite db](https://github.com/elesiuta/picosnitch/blob/41af07d770925731d71642ea1ece17b4ca9d39f7/picosnitch.py#L2060). The location is looked up with [get_geoip](https://github.com/elesiuta/picosnitch/blob/41af07d770925731d71642ea1ece17b4ca9d39f7/picosnitch.py#L1801) for each IP and not stored in the sqlite db.
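If someone picks this up, a rough sketch of the map figure (the column names here are assumptions about how the queried rows plus the `get_geoip` results get aggregated):
```python
import plotly.graph_objects as go

def connection_map(df):
    """Bubble map of connections; df is assumed to have lat, lon, domain and conns columns."""
    fig = go.Figure(go.Scattergeo(
        lat=df["lat"],
        lon=df["lon"],
        text=df["domain"],
        marker=dict(size=df["conns"].clip(upper=50), sizemode="area"),
    ))
    fig.update_layout(title="Connections by location")
    return fig
```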
To help with debugging, you can test it with `DASH_DEBUG=True python3 picosnitch.py start-dash` | 0easy
|
Title: Expose "Scalar" decorator at the root level of tartiflette
Body: Currently, it is not straightforward to register custom scalar types: the `Scalar` decorator is located in the `tartiflette.scalar` module.
In terms of API, it would be more consistent to expose this decorator at the same level as `Resolver`, `Subscription` and `Directive`.
**Currently**
```python
from tartiflette.scalar import Scalar
@Scalar("DateTime")
class ScalarDateTime:
@staticmethod
def coerce_output(value):
return value.isoformat()
@staticmethod
def coerce_input(value):
return iso8601.parse_date(value)
```
**After**
```python
from tartiflette import Scalar
@Scalar("DateTime")
class ScalarDateTime:
@staticmethod
def coerce_output(value):
return value.isoformat()
@staticmethod
def coerce_input(value):
return iso8601.parse_date(value)
```
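The change itself should be small, roughly a re-export in `tartiflette/__init__.py` (sketch, keeping whatever is already exported there):
```python
# tartiflette/__init__.py (sketch)
from tartiflette.scalar import Scalar  # noqa: F401
```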
| 0easy
|
Title: `RichTextElement.elements` items are never promoted to a proper Python object type
Body: Using the rich text editor widget with the following content:
<img width="477" alt="Screenshot 2024-03-05 at 11 39 10" src="https://github.com/slackapi/python-slack-sdk/assets/118377/0b627c94-4804-4535-8ada-38c4bd91be58">
gets you an unordered list, which upon a `view_submission` event gets posted into the state values as:
<details><summary>JSON representation of state value</summary>
```
{
"type": "rich_text",
"elements": [
{
"type": "rich_text_list",
"style": "bullet",
"indent": 0,
"elements": [
{
"type": "rich_text_section",
"elements": [
{
"type": "text",
"text": "a"
}
]
},
{
"type": "rich_text_section",
"elements": [
{
"type": "text",
"text": "b"
}
]
}
],
"border": 0
}
]
}
```
</details>
which can then be converted into an object like so:
```
>>> obj = Block.parse(example)
>>> type(obj)
slack_sdk.models.blocks.blocks.RichTextBlock
>>> len(obj.elements)
1
>>> type(obj.elements[0])
slack_sdk.models.blocks.block_elements.RichTextListElement
```
which is all well and good... But the child elements of the encountered `RichTextElement` subclass are not converted to the equivalent Python class, and always remain as a dict, effectively looking like the parsing "gave up" at a certain depth (which isn't strictly true):
```
>>> type(obj.elements[0].elements[0])
dict
>>> obj.elements[0].elements[0]
{'type': 'rich_text_section', 'elements': [{'type': 'text', 'text': 'a'}]}
```
As far as I can tell, this is because the `RichTextListElement`, `RichTextPreformattedElement`, `RichTextQuoteElement` and `RichTextSectionElement` never themselves do any parsing, as all of them are defined using:
```python
self.elements = elements
```
instead of something like:
```python
self.elements = BlockElement.parse_all(elements)
```
The latter of which would at least make it so that:
```
>>> type(obj.elements[0].elements[0])
slack_sdk.models.blocks.block_elements.RichTextSectionElement
```
That still leaves some child elements unpromoted, because there's nothing which converts `{'type': 'text', 'text': 'a'}` to a `RichTextElementParts.Text` etc, but that feels like a somewhat separate issue.
### Category
- [x] **slack_sdk.models** (UI component builders)
| 0easy
|
Title: New Libdoc language selection button does not work well on mobile
Body: In Robot Framework 7.2 a new language selection button was introduced into the keyword documentation generated by libdoc.
That new language selection makes the mobile view very uncomfortable, as it is very hard to tap the (important) button for the keyword list next to it.
Please ensure that the size of the button for the keyword list remains big enough.
I also propose to keep the button for the keyword list in the corner of the screen.
Screenshot below
 | 0easy
|
Title: _delete_python_comments fails if it receives non-Python code
Body: https://github.com/ploomber/ploomber/blob/c87c60db954f72309b1e421a5399f9f6a426e5fe/src/ploomber/codediffer.py#L37
the function should not fail and simply return the input code | 0easy
|
Title: Misc documentation cleanup
Body: Documentation improvements and tweaks that are not highest prio, but would be nice to iron out:
- [ ] https://falcon.readthedocs.io/en/latest/community/contributing.html#reviews :arrow_left: checkboxes within use the GitHub flavour of Markdown, which does not render exactly the same in the MyST Markdown flavour that Sphinx uses. Check if there is a suitable plugin, or, in the worst case, just convert `CONTRIBUTING.md` to ordinary bullet lists. | 0easy
|
Title: Add `robot.result.TestSuite.to/from_xml` methods
Body: The current API we have for working with output.xml files programmatically requires using the `ExecutionResult` factory method. It returns a `Result` object that contains a `TestSuite` object in its `suite` attribute. The suite can be inspected and also modified, and the `Result` object has a `save` method for saving results back to XML after possible modifications. This is ok for many usages, but having a more convenient API for working with `TestSuite` objects would be nice.
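For comparison, the current workflow with the factory method looks like this:
```python
from robot.api import ExecutionResult

result = ExecutionResult("output.xml")
suite = result.suite            # robot.result.TestSuite
# ... inspect or modify the suite here ...
result.save("output-modified.xml")
```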
The result model gets `to/from_json` methods in RF 7.0 (#4847) and we could add `to/from_xml` to `TestSuite` as well. Similarly to the JSON counterparts, `to_xml` should return the XML by default, but also support writing it to a file. `from_xml` should work both with files and XML strings as well. We have all the needed functionality for reading and writing XML available, so implementing these methods is easy. There are some design decisions to be made, though:
1. Should the `<suite>` structure returned by `to_xml` be wrapped with `<robot></robot>` the same way as in output.xml? I believe it's not a good idea at least by default. The suite anyway wouldn't have anything to write to `<errors>` and although `<statistics>` could be created, it would feel somewhat strange. Returning just the `<suite>` is also consistent with what `to_json` returns. Being able to create a full output.xml with something like `full=True` could be convenient, but it can be added later if needed.
2. Should XML returned by `to_xml` contain the XML prolog (i.e. `<?xml version="1.0" encoding="UTF-8"?>`)? I believe it's not a good idea at least as the default behavior. It's optional in XML and could get in the way. Probably should be included if we supported something like `full=True`.
3. Should `from_xml` support both "full" output.xml files and files containing only `<suite>`? I believe both should be supported. Not supporting normal output.xml files would be *very* inconvenient and not supporting files generated by `to_xml` would be unacceptable. This is probably easiest to implement so that we in general support output.xml with and without `<robot>`.
4. What to do if `from_xml` is used with an output.xml containing `<errors>`? Options include ignoring them, reporting them or raising an exception. Because `<errors>` doesn't really affect the final execution status, I believe we can ignore them as long as it's mentioned in the documentation. Adding something like ` ignore_errors=True` that can be set to `False` could be considered as well, but probably those who are interested in these errors can just use `ExecutionResult`. | 0easy
|
Title: Is there a way to add tags to a span after it is created?
Body: ### Question
I have additional tags I want to that come from the result of a function running inside a span, but I'm not able to add them via `span.tags += ["my_tag"]` with the current API.
Is there some other way to do this?
Thanks! | 0easy
|