Title: Drop milliseconds from plot titles
Body: Time with milliseconds is a bit overkill on plot titles, and takes extra space too. I propose these changes to drop milliseconds from plot titles:
Replace `return num2date(times, units, calendar)` with `return num2date(times, units, calendar).replace(microsecond=0)` in the functions `generate_radar_time_begin` (l126) and `generate_grid_time_begin` (l137) in `pyart/graph/common.py` | 0easy
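The effect of the proposed change, sketched with a plain `datetime` (`num2date` returns datetime-like objects, so `.replace(microsecond=0)` behaves the same way):

```python
from datetime import datetime

# Truncate sub-second precision before formatting a plot title.
t = datetime(2024, 1, 2, 12, 30, 45, 123456)
print(t.isoformat())                         # 2024-01-02T12:30:45.123456
print(t.replace(microsecond=0).isoformat())  # 2024-01-02T12:30:45
```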
|
Title: greeting note from alphavantage raises an error
Body: Reading currencies, alphavantage returns a greeting note ("welcome") and this note raises an error in alphavantage.py line 363.
```
elif "Note" in json_response and self.treat_info_as_error:
    raise ValueError(json_response["Note"])
```
For this reason, alphavantage does not work in home assistant.
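A sketch of one possible fix — treat a greeting note as informational instead of fatal. The helper name and the `welcome` check are illustrative, not the actual alphavantage.py code:

```python
def handle_note(json_response, treat_info_as_error=True):
    """Return the API note, raising only for non-greeting notes."""
    note = json_response.get("Note")
    if note is None:
        return None
    if "welcome" in note.lower():
        return note  # greeting: surface it, but don't fail the request
    if treat_info_as_error:
        raise ValueError(note)
    return note
```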
| 0easy
|
Title: Plotting model with second spectrum then removing it does not work
Body: Something goes wrong with the navigator when adding and then removing a second navigator.
Example:
```python
import numpy as np
import hyperspy.api as hs
data = np.random.random((10, 20, 500))
s = hs.signals.Signal1D(data)
m = s.create_model()
m.plot()
```
- Move the red navigator
- Press the `E` key on your keyboard
- Move the new, blue, navigator
- Press the `E` key to remove the second navigator
The red navigator can no longer be moved, and an error is raised:
```python
matplotlib/axes/_base.py", line 3570, in _validate_converted_limits
raise ValueError("Axis limits cannot be NaN or Inf")
ValueError: Axis limits cannot be NaN or Inf
```
Movie example of this:
[model_plot_bug_second_navigator.webm](https://github.com/hyperspy/hyperspy/assets/1690979/649d311f-870b-48bc-aea0-348a4403b082)
## Python environment:
- HyperSpy version: current `RELEASE_next_patch`
- Python version: 3.11
| 0easy
|
Title: Update or remove outdated files in the repository root directory
Body: ## Classification:
Bug, Enhancement
## Summary
Update the supplementary files in the autokey repository root to the current state.
Currently, several files are out-of-date and unusable.
Update:
- [ ] - `autokey.spec` The openSUSE (?) package build specification is still on version 0.94.1. Update it to be able to build RPMs of the newest version
- [ ] - `extractDoc.py` This was seemingly used to extract the Qt autocompletion specification. This is still old Python 2 code and contains the old source path. Update this to be able to update the autocompletion automatically.
- [ ] - `PKG-INFO` This is still on version 0.90.4 and should either be updated or removed.
- [x] - `TODO` This TODO file isn't really helpful and can probably be removed. https://github.com/autokey/autokey/commit/3f4bf23f8ad140def989ede70db6b29f74e9499d
- [x] - `INSTALL` is slightly outdated, mentions `python` instead of `python3` https://github.com/autokey/autokey/commit/bfa96d1a339e26bd3ae67787920cbc1a0b9a0601
## Version
AutoKey version: all up to the current git master and develop branch
| 0easy
|
Title: Kagi
Body: Can you add Kagi? | 0easy
|
Title: HTTP Error 429: too many requests causes tests to fail
Body: Occasionally, the test suite can fail when it fetches the data from the same URL too frequently. We should replace the URL with a local version of cars.csv if possible. In addition, any remote fetches of the data should catch the HTTP 429 error and add a sleep timer to ensure that the test doesn't fail when the query rate is too high.
We could also look into dataframe reuse opportunities across tests so that the dataframe doesn't need to be reloaded every single time.
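The 429 handling could look roughly like this (helper name, retry count, and backoff are illustrative; the opener is injectable so the logic can be exercised without a network):

```python
import time
import urllib.error
import urllib.request

def fetch_with_retry(url, retries=3, delay=1.0, opener=urllib.request.urlopen):
    """Fetch a URL, sleeping and retrying when the server answers HTTP 429."""
    for attempt in range(retries):
        try:
            with opener(url) as response:
                return response.read()
        except urllib.error.HTTPError as err:
            if err.code == 429 and attempt < retries - 1:
                time.sleep(delay)  # back off before the next attempt
            else:
                raise
```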

| 0easy
|
Title: fails to find .desktop file to create autostart .desktop (autokey should search `$XDG_DATA_DIRS`)
Body: ## Classification:
Bug
## Reproducibility:
Always
## Version
AutoKey version: 0.95.10
Used GUI (Gtk, Qt, or both): GTK (but shouldn't depend on GUI)
If the problem is known to be present in more than one version, please list all of those.
Installed via: self-packaged (built from source)
Linux Distribution: GNU Guix
## Summary
When autokey(-gtk) looks to create the autostart .desktop file, it looks to find the application desktop file, but only in some set locations (`/usr/share/applications` besides `$XDG_DATA_HOME`). Rather it should look in `$XDG_DATA_DIRS` for application desktop files to be more general. For example, this fails in distros like GNU Guix which do not use `/usr/share/applications`, but do properly set `$XDG_DATA_DIRS` as search paths for things like .desktop files.
## Steps to Reproduce (if applicable)
- Try to enable autostart in the settings dialog
## Expected Results
- The .desktop file should be found and an autostart entry created
## Actual Results
- Instead, this happens. :(
- output:
```
2021-09-07 12:18:17,567 INFO - config-manager - Save autostart settings: AutostartSettings(desktop_file_name='autokey-gtk.desktop', switch_show_configure=False)
2021-09-07 12:18:17,568 ERROR - config-manager - Failed to find a usable .desktop file! Unable to find: autokey-gtk.desktop
Traceback (most recent call last):
File "/gnu/store/khp...-python-autokey-0.95.10/lib/python3.8/site-packages/autokey/configmanager.py", line 209, in _create_autostart_entry
source_desktop_file = get_source_desktop_file(autostart_data.desktop_file_name)
File "/gnu/store/khp...-python-autokey-0.95.10/lib/python3.8/site-packages/autokey/configmanager.py", line 251, in get_source_desktop_file
raise FileNotFoundError("Desktop file for autokey could not be found. Searched paths: {}".format(possible_paths))
FileNotFoundError: Desktop file for autokey could not be found. Searched paths: (PosixPath('/home/podiki/.local/share/applications'), PosixPath('/usr/share/applications'), PosixPath('/gnu/store/khp...-python-autokey-0.95.10/lib/python3.8/config'))
```
But, `autokey-gtk.desktop` is found in `$XDG_DATA_DIRS` (in this case in a profile directory, this is a Guix thing, but autokey should use XDG specs). This can be verified through using launchers like rofi, or with xdg tools.
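A sketch of building the search paths per the XDG base-directory spec — the defaults follow the spec's documented fallbacks, and the file name is just the one from this report:

```python
import os
from pathlib import Path

def desktop_file_candidates(name="autokey-gtk.desktop"):
    """Yield candidate .desktop paths from XDG_DATA_HOME and XDG_DATA_DIRS."""
    data_home = os.environ.get("XDG_DATA_HOME") or os.path.expanduser("~/.local/share")
    data_dirs = os.environ.get("XDG_DATA_DIRS") or "/usr/local/share:/usr/share"
    for base in [data_home, *data_dirs.split(":")]:
        yield Path(base) / "applications" / name
```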
## Notes
I have not tried to patch this, but the relevant code is at https://github.com/autokey/autokey/blob/4b87b32452cd41d2b98341db220e99cf4f9e34d6/lib/autokey/configmanager.py#L234 Here, autokey should search `$XDG_DATA_DIRS` for the autokey .desktop file. | 0easy
|
Title: [ENH] deduplicate `polars` and `gluonts` check logic in `datatypes` module
Body: Logic related to checking and conversion of `polars` and `gluonts` data types seems to be scattered across the `datatypes` module. This is also highly duplicative.
Optimally, the logic gets moved and deduplicated inside `datatypes._adapter.gluonts` or `polars`.
Contributors should branch off https://github.com/sktime/sktime/pull/7161 if it is not yet merged, to avoid clashes with the ongoing refactor. | 0easy
|
Title: [QOL] Add way to dynamically find out the correct GraphQL endpoint
Body: Possible endpoints are usually paths like
`/graphql`
`/graphiql`
`/api`
We can check for this if the user doesn't know what the endpoint is | 0easy
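A sketch of probing the candidate paths — `CANDIDATE_PATHS` and the injected `probe` callable are illustrative, not an existing API:

```python
CANDIDATE_PATHS = ["/graphql", "/graphiql", "/api"]

def find_graphql_endpoint(base_url, probe):
    """Return the first candidate path that answers like a GraphQL endpoint.

    `probe(url)` should POST a trivial query (e.g. `{__typename}`) and return
    the decoded JSON body, or None on error -- injected here so the discovery
    logic stays testable without a live server.
    """
    for path in CANDIDATE_PATHS:
        body = probe(base_url.rstrip("/") + path)
        if isinstance(body, dict) and "data" in body:
            return path
    return None
```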
|
Title: PauliString.after() crashes for Clifford gate PhXZ(a=0.25,x=-1,z=0) in _decompose_into_cliffords
Body: **Description of the issue**
PauliString.after() should be able to return a PauliString when given Clifford ops as input, but it fails for the Clifford gate `PhasedXZGate(axis_phase_exponent=0.25, x_exponent=-1, z_exponent=0)`.
**How to reproduce the issue**
```
>>> a = cirq.NamedQubit('a')
>>> ps = cirq.PauliString() * cirq.Y(a)
>>> phxz = cirq.PhasedXZGate(axis_phase_exponent=0.25, x_exponent=-1,z_exponent=0)
>>> ps.after(phxz.on(a))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/google/home/renyichen/miniconda3/envs/cirq-conda/lib/python3.10/site-packages/cirq/ops/pauli_string.py", line 1008, in after
return self.conjugated_by(protocols.inverse(ops))
File "/usr/local/google/home/renyichen/miniconda3/envs/cirq-conda/lib/python3.10/site-packages/cirq/ops/pauli_string.py", line 987, in conjugated_by
for clifford_op in _decompose_into_cliffords(op)[::-1]:
File "/usr/local/google/home/renyichen/miniconda3/envs/cirq-conda/lib/python3.10/site-packages/cirq/ops/pauli_string.py", line 1606, in _decompose_into_cliffords
return [out for sub_op in decomposed for out in _decompose_into_cliffords(sub_op)]
File "/usr/local/google/home/renyichen/miniconda3/envs/cirq-conda/lib/python3.10/site-packages/cirq/ops/pauli_string.py", line 1606, in <listcomp>
return [out for sub_op in decomposed for out in _decompose_into_cliffords(sub_op)]
File "/usr/local/google/home/renyichen/miniconda3/envs/cirq-conda/lib/python3.10/site-packages/cirq/ops/pauli_string.py", line 1608, in _decompose_into_cliffords
raise TypeError(
TypeError: Operation is not a known Clifford and did not decompose into known Cliffords: (cirq.T**-1).on(cirq.NamedQubit('a'))
```
**Cirq version**
```
1.5.0.dev20250109234340
```
| 0easy
|
Title: [Detections] - `from_inference` should include `'class_name'` key in `Detections.data` even if result is empty
Body: ### Bug
[`from_inference`](https://github.com/roboflow/supervision/blob/0ccb0b85adee4202f5fe96834a374a057bbbd9da/supervision/detection/core.py#L448) should include `'class_name'` key in `Detections.data` even if `roboflow` inference result is empty.
### Environment
- Latest version of supervision
- macOS
### Minimal Reproducible Example
```python
import cv2
import roboflow
import numpy as np
import supervision as sv
from tempfile import NamedTemporaryFile
roboflow.login()
rf = roboflow.Roboflow()
project = rf.workspace().project("people-detection-general")
model = project.version(5).model
x = np.zeros((1000, 1000, 3), dtype=np.uint8)
with NamedTemporaryFile(suffix=".jpeg") as f:
    cv2.imwrite(f.name, x)
    result = model.predict(f.name).json()
    detections = sv.Detections.from_inference(result)
print(detections)
```
Here is the result:
```
Detections(xyxy=array([], shape=(0, 4), dtype=float32), mask=None, confidence=array([], dtype=float32), class_id=array([], dtype=int64), tracker_id=None, data={})
```
Note `data={}` and it should be `data={'class_name': []}`.
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | 0easy
|
Title: Remove deprecated support from scrapy.utils.log.logformatter_adapter()
Body: This helper was added in 1.0.0 (I think); it looks like the parts of it that don't show warnings are still useful, though I haven't looked closely at them. | 0easy
|
Title: Ask to run code after improvement
Body: Unlike after code creation, after code improvement (i.e., the `-i` flag) gpt-engineer doesn't ask if the code should be run.
## Feature description
When using the `-i` flag and gpt-engineer has finished improving code, the user should be asked if they want to run the code.
## Motivation/Application
In my view, a good workflow for gpt-engineer would be:
1. I enter a prompt, gpte asks clarifying questions, I answer, gpte creates a codebase
2. With my consent, gpte runs the code
3. I get asked if the code worked well. If not, I'm asked if I want to improve.
4. I enter an improvement prompt
5. gpte improves code
6. **With my consent, gpte runs the code** <- this issue
7. Repeat | 0easy
|
Title: [New feature] Add apply_to_images to GlassBlur
Body: | 0easy
|
Title: Improve feedback when running `ploomber task`
Body: User executes this:
```sh
ploomber task some-task
```
Print something like this:
```
'some-task' executed successfully!
Products:
products/report.html
products/data.csv
``` | 0easy
|
Title: Weird stacking auto-complete
Body: It happens a lot and is pretty annoying, anyone get this/know a fix?
https://user-images.githubusercontent.com/34655998/153747557-fc8d07f3-78e4-4606-b8f6-415c26a5503d.mp4
Here are my custom environment variables
```
$DOTGLOB = True
$COMPLETIONS_CONFIRM = False
$COMPLETION_IN_THREAD = True
$COMPLETIONS_BRACKETS = False
$COMPLETIONS_DISPLAY = "single"
$CASE_SENSITIVE_COMPLETIONS = False
$AUTO_SUGGEST_IN_COMPLETIONS = True
$UPDATE_COMPLETIONS_ON_KEYPRESS = True
$COMMANDS_CACHE_SAVE_INTERMEDIATE = True
$XONSH_HISTORY_BACKEND = "sqlite"
$XONSH_HISTORY_FILE = "/home/ganer/.xonsh_history"
$XONSH_CACHE_EVERYTHING = True
$XONSH_CTRL_BKSP_DELETION = True
$UPDATE_PROMPT_ON_KEYPRESS = True
$XONSH_COLOR_STYLE = 'default'
$XONSH_STYLE_OVERRIDES['Token.Literal.Number'] = '#1111ff'
```
edit: also on another note, typing backslashes before most characters causes an error to appear in the console | 0easy
|
Title: deprecate and remove the --boxed option
Body: the related code has been moved to the pytest-forked plugin and won't ever support coverage or windows sanely | 0easy
|
Title: Test is not failed if listener sets keyword status to fail and leaves message empty
Body: RF 7.1 enhanced the listener interface so that listeners can affect the execution status (#5090). For example, the following listener ought to be able to fail a keyword if "something bad happened" so that the execution stops and the test using the keyword fails:
```python
def end_keyword(data: running.Keyword, result: result.Keyword):
    if result.status == 'PASS' and something_bad_happened():
        result.status = 'FAIL'
```
Unfortunately, exactly the above doesn't work. The keyword fails and remaining keywords are not executed, but the test nevertheless passes. The bug isn't actually in the code related to listeners, but in the code validating the test status, where an empty message is considered to mean that there was no failure. The problem thus doesn't occur if the listener also sets the message. This example causes the test to fail as expected:
```python
def end_keyword(data: running.Keyword, result: result.Keyword):
    if result.status == 'PASS' and something_bad_happened():
        result.status = 'FAIL'
        result.message = 'Something bad happened'
```
Listeners should in general provide a meaningful error message when they fail a keyword or change the status otherwise, but this is nevertheless a pretty severe bug as it can cause false positives. | 0easy
|
Title: provide a way to use deep neural networks
Body: ### Description
Igel is built on top of sklearn at the moment. Therefore, all sklearn models can be used. This includes, of course, the neural network models integrated in sklearn (MLP classifier and MLP regressor). However, sklearn is not powerful enough when it comes to deep neural networks.
Therefore, this issue aims to include support for using deep neural networks in igel. Maybe Keras API?
### Example:
```
model:
    type: classification  # or regression
    algorithm: neural network  # this is already implemented. However, it is using the sklearn NN implementation
    arguments: default  # this will use the default argument of the NN class
```
As you can see, the user can provide these configs in the yaml file and igel will train a neural network. However, the NN model in sklearn is not as powerful as other frameworks like keras, tensorflow, torch, etc.
What I mean with this issue and want to implement in the future is **maybe** something like this (feel free to bring new ideas):
```
model:
    deep: true
    type: classification  # or regression
    algorithm: neural network  # this will now use the Keras NN model since the user added deep: true
    arguments: default
```
OR Maybe even this can be implemented as a VISION ( This will probably take a long time to implement):
```
model:
    uses: keras  # the user can here provide keras, tensorflow or torch
    type: classification  # or regression
    algorithm: neural network  # this will now use the Keras NN model since the user provided that he wants to use keras
    arguments: default
```
| 0easy
|
Title: [ENH]: Groupby Top K
Body: # Brief Description
<!-- Please provide a brief description of what you'd like to propose. -->
I would like to propose a function that allows users to take in a dataframe and return another dataframe that has the top "k" values after grouping by another column.
This has come up multiple times at work, when I have to pick out "top protein mutants in each cluster".
Posting this here in case anybody wants to take a formal stab at it, otherwise I'll get to it later.
# Example API
```python
from typing import Dict, Hashable
import pandas as pd

def groupby_topk(d: pd.DataFrame, groupby_column_name: Hashable,
                 sort_column_name: Hashable, k: int,
                 sort_values_kwargs: Dict) -> pd.DataFrame:
    """
    Return a dataframe that has the top "k" values per group.
    Some docstrings to flesh out:
    - Groups: defined by the "groupby" column.
    Taken inspiration from https://stackoverflow.com/a/27844045/1274908
    """
    return d.groupby(groupby_column_name).apply(
        lambda df: df.sort_values(sort_column_name, **sort_values_kwargs).head(k)
    )
```
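For illustration, the proposed pattern on a toy frame (the column names are made up): with `k=2` it keeps the two highest scores per cluster.

```python
import pandas as pd

df = pd.DataFrame({
    "cluster": ["a", "a", "a", "b", "b"],
    "score": [3, 1, 2, 5, 4],
})
# Top 2 rows per cluster, highest score first.
top2 = df.groupby("cluster").apply(
    lambda g: g.sort_values("score", ascending=False).head(2)
)
print(list(top2["score"]))  # [3, 2, 5, 4]
```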
Give it a thumbs up if you're interested in seeing this too! | 0easy
|
Title: Add KDE functionality to hist and hist2d plots
Body: I'd like to add KDE (kernel density estimation) functionality for the 1D and 2D histogram plotting functions, `hist`, `hist2d`, and maybe `hexbin`. Users can then optionally add marginal distribution panels with `panel_axes`.
Currently, the only matplotlib plotting function supporting KDE estimation is `violinplot`, but the result is often gross -- the "violins" do not smoothly taper to zero-width tails like in seaborn. Instead they abruptly cut off at the distribution minimum/maximum. So, we shouldn't try to use the existing KDE engine -- we should implement a new KDE estimation engine, similar to seaborn, and use it to power `hist`, `hist2d`, and `violinplot`. This may involve writing a new `violinplot` from scratch. | 0easy
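The kind of estimator meant here, as a minimal sketch (Gaussian kernel with a fixed bandwidth; a real engine would add automatic bandwidth selection à la Scott/Silverman and seaborn-style tail handling):

```python
import numpy as np

def gaussian_kde_1d(samples, grid, bandwidth):
    """Evaluate a Gaussian kernel density estimate of `samples` on `grid`."""
    samples = np.asarray(samples, dtype=float)[:, None]  # shape (n, 1)
    grid = np.asarray(grid, dtype=float)[None, :]        # shape (1, m)
    z = (grid - samples) / bandwidth
    kernels = np.exp(-0.5 * z**2) / (bandwidth * np.sqrt(2 * np.pi))
    return kernels.mean(axis=0)  # average kernel contribution per grid point
```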
|
Title: Convert incremental coverage scripts to use json coverage report
Body: **Description of the issue**
The current coverage checking script [check/pytest-and-incremental-coverage](https://github.com/quantumlib/Cirq/blob/9fbaa05433237fbf427de4dd1e70adfbd908d618/check/pytest-and-incremental-coverage) relies on the `coverage annotate` command which is deprecated per https://github.com/nedbat/coveragepy/commit/59b07a12de95de776fa2f44a2101f63a156fee75. The `coverage` tool has `json` command for producing JSON report, which should provide equivalent data as `coverage annotate`.
**Proposal**
- [ ] update scripts [check/pytest-and-incremental-coverage](https://github.com/quantumlib/Cirq/blob/9fbaa05433237fbf427de4dd1e70adfbd908d618/check/pytest-and-incremental-coverage) and [check/pytest-changed-files-and-incremental-coverage](https://github.com/quantumlib/Cirq/blob/9fbaa05433237fbf427de4dd1e70adfbd908d618/check/pytest-changed-files-and-incremental-coverage) to use `coverage json` instead of `coverage annotate`
- [ ] prune any orphaned code for processing `coverage annotate` output, for example in [dev_tools/check_incremental_coverage_annotations.py](https://github.com/quantumlib/Cirq/blob/9fbaa05433237fbf427de4dd1e70adfbd908d618/dev_tools/check_incremental_coverage_annotations.py)
- [ ] move settings from [dev_tools/conf/.coveragerc](https://github.com/quantumlib/Cirq/blob/9fbaa05433237fbf427de4dd1e70adfbd908d618/dev_tools/conf/.coveragerc) to pyproject.toml so developers can run `check/pytest` without having to use the `--cov-config=.../to/.coveragerc` option.
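For the first checklist item, a sketch of consuming the JSON report (assuming the documented `coverage json` layout: a top-level `"files"` mapping whose entries carry `"missing_lines"`):

```python
import json

def missing_lines_by_file(report_path):
    """Map each measured file to its missing line numbers."""
    with open(report_path) as fh:
        report = json.load(fh)
    return {
        path: info.get("missing_lines", [])
        for path, info in report.get("files", {}).items()
    }
```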
**Benefits**
- CI scripts will be future-proofed for the removal of `coverage annotate`
**Cirq version**
1.3.0.dev at b28bfce0b91437cc151c0a6b6f0fc9f0f8fe5942
| 0easy
|
Title: Python version in setup.py is outdated
Body: **Describe the bug**
#140 suggests that the projects supports Python 3.8.1, however the Python version specifier is `~3.6` which is equivalent of `>=3.6.0 <3.7.0`.
**To Reproduce**
When trying to install the package with poetry (`poetry add syrupy`) I get:
```
[...]
The current project's Python requirement (>=3.6) is not compatible with some of the required packages Python requirement:
- syrupy requires Python ~=3.6
Because no versions of syrupy match >0.4.2,<0.5.0
and syrupy (0.4.2) requires Python ~=3.6, syrupy is forbidden.
[...]
```
As shown in the error message in `pyproject.toml` I have `python = ">=3.6"`.
**Expected behavior**
The package should install since `syrupy` supports all Python versions over 3.6
| 0easy
|
Title: [DOC] How does one initialise a network without `from_dataset`?
Body: ### Expected behavior
Code:
```
from pytorch_forecasting.models import NBeats, BaseModel
from pytorch_lightning import Trainer, LightningModule
model = NBeats()
trainer.fit(model, train, valid)
```
Result:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-102-a9a3146dfc98>](https://localhost:8080/#) in <cell line: 1>()
----> 1 trainer.fit(model, train,valid)
1 frames
[/usr/local/lib/python3.10/dist-packages/pytorch_lightning/utilities/compile.py](https://localhost:8080/#) in _maybe_unwrap_optimized(model)
130 return model
131 _check_mixed_imports(model)
--> 132 raise TypeError(
133 f"`model` must be a `LightningModule` or `torch._dynamo.OptimizedModule`, got `{type(model).__qualname__}`"
134 )
TypeError: `model` must be a `LightningModule` or `torch._dynamo.OptimizedModule`, got `NBeats`
```
I'm trying to use the naked networks without the rest of the stuff around pytorch_forecasting.
I've read the source code and I do believe this should work, but I must be doing something stupid.
Is it possible to add an example or FAQ of how to use pytorch_forecasting without `from_dataset`? | 0easy
|
Title: Accept JSON type
Body: ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
I want to be able to show JSON column in detail page.
Could not find field converter for column *questions* (<class 'sqlalchemy.dialects.postgresql.json.JSON'>).
### Describe the solution you would like.
_No response_
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: Documentation issue for pn.pane.Image and others
Body: I noticed that the reference for pn.pane.Image, pn.pane.PNG, and pn.pane.JPG (and I suspect some of the other image-related panes) says to use `style` to denote CSS style, when `styles` is the actual parameter keyword that should be used.
Sorry if this is not the best place to put this! I'm never sure where documentation errors should be left. | 0easy
|
Title: Add animation to thumbsup/down in chat when user not voted yet
Body: maybe something like this is ok
https://www.vecteezy.com/video/10877395-like-loop-motion-graphics-video-transparent-background-with-alpha-channel | 0easy
|
Title: Edge case: `xonsh --no-rc --no-env`: `TERM environment variable not set`
Body: ```xsh
xonsh --no-rc --no-env
clear
# TERM environment variable not set
```
We need to have default TERM in `--no-env` if it's needed or close this issue with clarification why we don't.
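A minimal sketch of the first option — provide a fallback only when `TERM` is unset, so an explicit user value always wins (`"linux"` is just an illustrative default, not a decided value):

```python
import os

def ensure_term(default="linux"):
    """Set TERM to a fallback value only if it is not already set."""
    return os.environ.setdefault("TERM", default)
```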
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: Forbid path-like child names
Body: It would also be nice to add a check like this for child names.
_Originally posted by @shoyer in https://github.com/pydata/xarray/pull/9378#pullrequestreview-2301368526_
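Such a check could be as small as this (a sketch; the exact rule is part of the discussion):

```python
def validate_child_name(name):
    """Reject path-like child names such as 'a/b'."""
    if not isinstance(name, str) or "/" in name:
        raise ValueError(f"invalid child name {name!r}: must be a string without '/'")
    return name
```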
| 0easy
|
Title: How to display percentage number on top of arc aka. the pie chart?
Body: Right now there isn't such a function yet? Also, is there any way to annotate numbers on any generated chart?
So is pygwalker, at the current stage, meant for prototyping? | 0easy
|
Title: Presets for ProfilerOptions/Profiler
Body: **Is your feature request related to a problem? Please describe.**
The Profiler can be slow with all options enabled. Most users won't need all of the settings to be enabled. Changing the settings one by one takes quite a bit of time and you need to study the docs quite a bit.
**Describe the outcome you'd like:**
It would be nice if ProfilerOptions or the Profiler itself had the possibility of quickly adjusting multiple settings by setting a preset. The current default behavior could have the option `"all"` or `"complete"`. It might look like this:
`profiler = dp.Profiler(data, preset="complete")`
or
`opts = dp.ProfilerOptions(preset="complete")`
Some other presets could be:
- `"standard"` - where some niche features that are performance intensive are deactivated
- `"numeric_stats_disabled"` - self-explanatory
- `"data_types"` - where only the data types of columns are inferred (this is actually what I want right now)
I'm sure you can think of others. The down side of this feature is that many presets would be opinionated.
The implementation would be straightforward, and the source code, e.g. `profiler_presets.py`, would nicely list all the settings that are changed by each preset.
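A sketch of what the preset mechanism could look like internally — the preset names and option keys here are illustrative, not actual dataprofiler settings:

```python
# Hypothetical preset table: preset name -> option values.
PRESETS = {
    "complete": {"numeric_stats": True, "data_labeling": True},
    "standard": {"numeric_stats": True, "data_labeling": False},
    "data_types": {"numeric_stats": False, "data_labeling": False},
}

def resolve_preset(name, overrides=None):
    """Start from a named preset and apply per-option overrides on top."""
    if name not in PRESETS:
        raise ValueError(f"unknown preset {name!r}; choose from {sorted(PRESETS)}")
    options = dict(PRESETS[name])
    options.update(overrides or {})
    return options
```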
| 0easy
|
Title: Update `dbapi` method to `import_dbapi` for `APSWGSheetsDialect` to remove warning at the start of program
Body: **Is your feature request related to a problem? Please describe.**
For newer versions of SQLAlchemy (2.0), usage of `dbapi()` class method is deprecated. It is recommended to implement `import_dbapi()` class method. [Docs](https://docs.sqlalchemy.org/en/20/core/internals.html#sqlalchemy.engine.Dialect.import_dbapi)
So, the following warning is shown every time I use SQLAlchemy with Google Sheets adapter:
```
SADeprecationWarning: The dbapi() class method on dialect classes has been renamed to import_dbapi(). Implement an import_dbapi() class method directly on class <class 'shillelagh.backends.apsw.dialects.gsheets.APSWGSheetsDialect'> to remove this warning; the old .dbapi() class method may be maintained for backward compatibility.
```
**Describe the solution you'd like**
I would like to update the base class `APSWDialect`. Remove (if it does not break anything, I am not sure how it really works rn) or extend the class with the class method `import_dbapi`.
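The SQLAlchemy 2.0 pattern sketched on a stand-in class (using `sqlite3` in place of the real apsw driver module):

```python
class DialectSketch:
    """Stand-in showing import_dbapi() with a backward-compatible dbapi()."""

    @classmethod
    def import_dbapi(cls):
        import sqlite3  # placeholder for the real DB-API module
        return sqlite3

    @classmethod
    def dbapi(cls):
        # Kept for SQLAlchemy < 2.0; simply delegates to the new name.
        return cls.import_dbapi()
```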
**Describe alternatives you've considered**
Leaving it as is for now. But as it is deprecated, it is better to follow the docs.
**Additional context**
Python 3.11
SQLAlchemy==2.0.20
importlib-metadata==6.8.0 (without it SQLAlchemy, ORM to be exact, does not work)
| 0easy
|
Title: Tox4: Django test suite database can not be recreated in Tox 4 (works in Tox 3)
Body: ## Issue
I want to update the test suite of a library from tox 3 to tox 4. I am running tox in parallel mode in Github actions.
Part of the test suite are Django tests that create a test Postgres database (that is supplied by a GitHub actions service docker container).
Somehow in tox 4 the creation of the test database fails. It fails only in Python 3.8 and above (3.7 and below still work)
Here is our working test suite with latest tox 3:
https://github.com/getsentry/sentry-python/actions/runs/3752739310/jobs/6375243340
And here is the failing test suite with tox 4.0.16:
https://github.com/getsentry/sentry-python/actions/runs/3752701362/jobs/6375159457
I only changed the tox version, the rest is the same in the both examples.
It seems that the Django test suite tries to create the test db although it already exists (normally the db is recreated if it exists)
I have tried to debug this and find out what the originating problem is, but without success.
As tox 4 is a complete rewrite it could be anything.
Has anyone any hints what this could be?
Is the way tox runs parallel tests totally different between tox 3 and 4?
Any help is greatly appreciated!
| 0easy
|
Title: Make text monospace
Body: - [x] Hexapod robot dimensions label
- [x] Hexapod Robot dimensions input
- [x] Kinematics control input | 0easy
|
Title: [Docs] Remove redundant CI when docs merged into main
Body: ### Checklist
- [x] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 2. Please use English, otherwise it will be closed.
### Motivation
<img width="640" alt="Image" src="https://github.com/user-attachments/assets/4e7bc601-683f-494f-8d47-c3c80f71e309" />
The execute-notebook CI is there to test the correctness of PRs. Once a PR is merged into main, that CI should not be triggered again; only execute-and-deploy should run. We should fix our CI workflow.
### Related resources
_No response_ | 0easy
|
Title: Support string `SELF` (case-insensitive) when library registers itself as listener
Body: Currently if a library registers itself to be a listener, it needs to do it like this:
```python
class Library:

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self

    def start_test(self, data, result):
        ...
```
This is somewhat annoying and also makes it impossible to configure the listener using the `@library` decorator. We should allow using string `SELF` in this case. It would allow changing the above to this:
```python
class Library:
    ROBOT_LIBRARY_LISTENER = 'SELF'

    def start_test(self, data, result):
        ...
```
It would also support even more convenient `@library` usage that would automatically make sure listener methods are not registered as keywords:
```python
from robot.api.deco import keyword, library

@library(listener='self')
class Library:

    def start_test(self, data, result):
        ...

    @keyword
    def example(self):
        ...
```
This is trivial to implement and fits well with other listener enhancements #3296 and #4910 done in RF 7.0.
| 0easy
|
Title: [Refactor] Informative error messages on Scenario build failure
Body: ### Description
The error message `raise InvalidScenario(scenario.id)` when scenario building fails is not very informative, because it leads to something like:
```
taipy.core.exceptions.exceptions.InvalidScenario: SCENARIO_MS_analysis_afc4e19b-be39-4651-bc9d-11f7807fd3af
```
with Traceback pointing only at the Scenario Factory.
I suggest adding a warning to
https://github.com/Avaiga/taipy/blob/9cc59f2fbd5ed3d261905923a9cb85dcd623e3f3/taipy/core/scenario/scenario.py#L608-L621
like:
```python
import warnings
warnings.warn( f"Graph directed: {nx.is_directed( dag )}, Cycle found at: {nx.cycles.find_cycle( dag )}" )
```
and
```python
if not (isinstance(left_node, DataNode) and isinstance(right_node, Task)) and \
        not (isinstance(left_node, Task) and isinstance(right_node, DataNode)):
    warnings.warn(
        f"Edge between left node {left_node.get_label()} and right node "
        f"{right_node.get_label()} does not connect a Task and a DataNode"
    )
    return False
```
### Acceptance Criteria
- [ ] Any new code is covered by a unit test.
- [ ] Check code coverage is at least 90%.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional) | 0easy
|
Title: [TRACKER] More useful error messages
Body: ### NOTE:
For users encountering poor UX related to Tracecat errors, please kindly add your error message to this issue. We aim to have a <24 hour turnover for every insufficiently useful error message.
## Problem
We designed the parser from first principles that fits according to parser / complier / arithmetic theory. Therefore existing error messages e.g.
```
Failed to parse expression: Unexpected end-of-input. Expected one of: \n\t* IF\n\t* OPERATOR\n\t* RPAR\n\t* COMMA\n'
```
are often cryptic and do not provide sufficient next steps / information to the user, i.e. good for SWEs / development / unit tests but not great for the end user.
## Solution
- Do an audit of all error messages. Make sure we reraise exceptions with a user friendly explanation of the error.
- Make sure every exception comes with concrete next-steps on how to resolve the issue.
- Next steps must provide concrete examples, links to docs, references to the UI | 0easy
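The reraise pattern described above, as a sketch (the exception class, helper, and wording are illustrative, not existing Tracecat code):

```python
class ExpressionSyntaxError(Exception):
    """User-facing wrapper around low-level parser errors."""

def parse_expression(source, parse):
    """Run `parse(source)`, re-raising parse failures with actionable guidance."""
    try:
        return parse(source)
    except SyntaxError as err:
        raise ExpressionSyntaxError(
            f"Could not parse expression {source!r}: {err}. "
            "Check for an unclosed parenthesis or a trailing operator, "
            "and see the expressions documentation for worked examples."
        ) from err
```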
|
Title: k8s file-mounted secrets
Body: In K8S, it is common to mount secrets to a file such as tmpfs.
The typical format puts each secret into a separate file inside a mounted volume, where the key is the filename and the content is the value.
Is there a way to achieve loading this in Dynaconf?
If not, what is the best way to add this functionality (using hooks or plugin, or contribute a PR?).
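One way this could be loaded by hand (a sketch; whether trailing newlines should be stripped is a policy choice, noted in the comment):

```python
from pathlib import Path

def load_file_mounted_secrets(mount_dir):
    """Read a mounted secrets volume: key = file name, value = file contents."""
    secrets = {}
    for path in Path(mount_dir).iterdir():
        if path.is_file():
            # K8s secret files usually have no trailing newline, but strip
            # one defensively in case the secret was created from a file.
            secrets[path.name] = path.read_text().rstrip("\n")
    return secrets
```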
| 0easy
|
Title: custom validation set
Body: Great lib!
It would be even greater with the possibility to input a custom validation set. In my case, I have correlation between the train points, so I cannot use cross-validation, nor a validation split with or without shuffle. I need to input a dedicated validation set "far" in time from the train data points.
|
Title: [Feature request] Add apply_to_images to ImageCompression
Body: | 0easy
|
Title: Extra sliders on the GT management pages
Body: ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
[random_200.zip](https://github.com/user-attachments/files/19027349/random_200.zip)
```python
from pathlib import Path

from cvat_sdk import make_client, models

with make_client("http://localhost", port=8080, credentials=("user", "password")) as client:
    data_dir = Path("~/Downloads/random_200").expanduser()

    task = client.tasks.create_from_data(
        spec=models.TaskWriteRequest(
            name="task with gt - images with frame filter",
            labels=[{"name": "cat"}, {"name": "dog"}],
            segment_size=10,
        ),
        resources=[data_dir / f"image_{i}.png" for i in range(200)],
        data_params=dict(
            image_quality=90,
            sorting_method="natural",
            chunk_size=5,
            start_frame=3,
            stop_frame=197,
            frame_step=2,
            validation_params={
                "mode": "gt",
                "frame_selection_method": "random_uniform",
                "frame_count": 20,
            },
            use_cache=True,
        ),
    )
```
1. Create a task using the script and data above
2. Go to the Quality control page
3. Find extra sliders on the management (horizontal) and on the settings (vertical) tabs


### Expected Behavior
No extra sliders are on the page.
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_ | 0easy
|
Title: Changing color theme
Body: ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
This feature request is not related to a problem.
### Describe the solution you would like.
I would like to see some documentation on how the color theme can be changed.
### Describe alternatives you considered
I've tried searching for how other repos have done this and so far it appears like this is a custom implementation.
### Additional context
_No response_ | 0easy
|
Title: Add alpha support for scatter plots
Body: **Describe the solution you'd like**
When I'm plotting scatter plots (such as PCA), oftentimes there is a bunch of data. By using the ``alpha`` parameter on matplotlib (also available through the pandas plotting interface), it allows you to discern the data and amount of overlapping.
**Is your feature request related to a problem? Please describe.**
In general, I think an ``alpha`` parameter to any scatter plot visualization (``PCADecomposition``) would be awesome. It would also be in line with what pandas plotting provides.
**Examples**
Seaborn scatter plots come with a slight alpha applied. But you can use ``scatter_kws={'alpha':0.3}`` to set it. In pandas or matplotlib you just pass ``alpha=0.3``.
Jake's book also has alpha turned on when plotting scatter plots. See https://jakevdp.github.io/PythonDataScienceHandbook/05.09-principal-component-analysis.html
| 0easy
|
Title: Broken documentation links on Gensim's PyPI page
Body: Links in the `Documentation` section of our official PyPI page are broken (return 404):
https://pypi.org/project/gensim/#documentation-1
OTOH, links at https://github.com/RaRe-Technologies/gensim#documentation work, so it looks like the PyPI version contains obsolete links that got out of sync.
| 0easy
|
Title: Create tests for try_orelse_with_no_variables_to_save.py and try_orelse_with_no_variables_to_save_and_no_args.py
Body: In this commit https://github.com/python-security/pyt/pull/59/commits/23e6412a20cec20b66508950440bb293279b8265 you can see where this would affect the program. (Search for "# raise")
The test would be just like `test_orelse` in `cfg_test.py`. | 0easy
|
Title: "as_categorical"
Body: Another idea!
```python
@pf.register_dataframe_method
def as_categorical(df, column_name, categories=None, ordered=True):
    """
    Convert one column in the dataframe to a categorical one.
    """
    if categories is None:
        categories = sorted(df[column_name].unique())
    df[column_name] = pd.Categorical(
        df[column_name], categories=categories, ordered=ordered,
    )
    return df
```
It replaces repeatedly doing this piece:
```python
df[column_name] = pd.Categorical(
    df[column_name], categories=categories, ordered=ordered,
)
```
Leaving this here for now. | 0easy
|
Title: [misc] Adapt to django-debug-toolbar breaking change
Body: The django-debug-toolbar [`4.4.0` release](https://django-debug-toolbar.readthedocs.io/en/latest/changes.html#id3) added a check that disallows running tests with django_toolbar installed, which broke our CI ([example](https://github.com/dynaconf/dynaconf/actions/runs/9269034760/job/25499495620#step:6:440)).
As a workaround, it's pinned to `~=4.3.0`, but we might want to adapt the code to get its upgrades.
|
Title: Document deployment
Body: uwsgi -s /tmp/uwsgi.sock -w quokka:app
gunicorn
heroku
etc
| 0easy
|
Title: DOC: Add note about installing requirements for `10 minutes to xorbits.numpy`
Body: The doc for [10 minutes to xorbits.numpy](https://doc.xorbits.io/en/latest/getting_started/numpy.html) requires some third-party libraries like `hdf5`; maybe we should add a note telling users how to install them. | 0easy
|
Title: In all modes the `eval_metric` should be set as in AutoML constructor (not tuned)
Body: `eval_metric` should be used for early stopping for all modes and all algorithms (if early stopping applicable) | 0easy
|
Title: Add a changelog file
Body: ## Description
Create a changelog file. More info at: https://keepachangelog.com/en/1.0.0/ | 0easy
|
Title: [BUG] AttributeError: 'series' object has no attribute 'array' when calling np.array(df)
Body: ### Describe the bug
When calling `np.array(df)` in my code, I encountered the following error: "`AttributeError: 'series' object has no attribute '__array__'`".
### To Reproduce
To help us to reproduce this bug, please provide information below:
1. Your Python version: python 3.8
2. The version of Xorbits you use: the latest, 0.2.0
3. Versions of crucial packages:
numpy==1.22.4
4. Minimized code to reproduce the error.
```
import numpy as np
import xorbits.pandas as pd
df = pd.DataFrame({"a": [1, 2, 3]})  # note: df was not defined in the original report

# code that triggers the error
data = np.array(df)
```
### Expected behavior
I expected the `np.array()` function to create a numpy array from the Pandas DataFrame `df`.
### Additional context
N/A
| 0easy
|
Title: Fix and re-enable `unnecessary-comprehension` and `use-dict-literal` pylint tags
Body: Both are valid simplification hints. | 0easy
|
Title: [ENH] Friendlier error when neo4j creds not provided
Body: Community user usability report: If Neo4j creds or a driver are not provided, the error currently looks like:
```
/path/to/conda_env/lib/python3.8/site-packages/graphistry/plotter.py in cypher(self, query, params)
1504 res = copy.copy(self)
1505 driver = self._bolt_driver or PyGraphistry._config['bolt_driver']
-> 1506 with driver.session() as session:
1507 bolt_statement = session.run(query, **params)
1508 graph = bolt_statement.graph()
AttributeError: 'NoneType' object has no attribute 'session'
```
It'd be more friendly to do something like:
```
if driver is None:
    raise ValueError('No bolt driver configured. Use register(bolt=...) to specify credentials or a driver.')
``` | 0easy
|
Title: Work Pool and Work Queue update events
Body: ### Describe the current behavior
Currently, an event record is created in the database only when a work pool/work queue status is updated. If any other work pool/work queue fields are updated, no event is created.
### Describe the proposed behavior
Whenever work pool/work queue fields are updated, an event should be created in the database so we can build automations around work pool/work queue updates.
### Example Use
One example use case: whenever a work pool concurrency limit is updated or a work queue priority changes, we can write automations to handle subsequent actions, such as notifying on Slack or scaling resources accordingly.
### Additional context
_No response_ | 0easy
|
Title: Deprecate `xr.cftime_range` in favor of `xr.date_range`
Body: ### What is your issue?
`xr.date_range` automatically chooses a return type based on calendar and can take `use_cftime=True` to always return CFTimeIndex, so I don't see why we need `xr.cftime_range` any more | 0easy
|
Title: Tell mypy et.al that Unset is always falsy
Body: **Is your feature request related to a problem? Please describe.**
mypy doesn't understand that `Unset` is always falsy
```python
if api_object.neighbours:
for x in api_object.neighbours: # <-- mypy: error: Item "Unset" of "Union[Unset, None, List[Neighbour]]" has no attribute "__iter__" (not iterable) [union-attr]
do_something_with(x)
```
**Describe the solution you'd like**
I would like mypy to understand this automatically
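For illustration, a sketch of how the annotation could look (the class here is a stand-in for the generated `Unset`, and `typing_extensions` is assumed as a backport for older Pythons):

```python
import sys

if sys.version_info >= (3, 8):
    from typing import Literal
else:
    from typing_extensions import Literal  # assumed backport dependency


class Unset:
    def __bool__(self) -> Literal[False]:
        return False


UNSET = Unset()

# With this annotation, type checkers can narrow `if value:` so that the
# truthy branch excludes Unset.
print(bool(UNSET))  # False
```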
Type annotation on `Unset.__bool__()` could be `Literal[False]` instead of `bool`, but [typing.Literal](https://docs.python.org/3/library/typing.html#typing.Literal) is not available for python < 3.8. | 0easy
|
Title: Remove `APIKeyMissingError`
Body: Remove `APIKeyMissingError`. We are not using it anymore, we are using `MissingMandatoryKeyError` instead.
We only need to delete [it](https://github.com/scanapi/scanapi/blob/master/scanapi/errors.py#L15-L20)
```
class APIKeyMissingError(MalformedSpecError):
    """Raised when 'api' key is not specified at root scope in the API spec"""

    def __init__(self, *args):
        message = "Missing api 'key' at root scope in the API spec"
        super(APIKeyMissingError, self).__init__(message, *args)
``` | 0easy
|
Title: Incorrect example in BooleanWidget docstring
Body: The docstring in BooleanWidget in
https://github.com/django-import-export/django-import-export/blob/e855fcf8ed43af4dd681435f516e9f50ee19613d/import_export/widgets.py#L119
shows example usage with missing brackets (the `widget` kwarg must be an instance, not a class).
|
Title: Tox not stripping ticks from paths in commands anymore
Body: ## Issue
I have a tox test env which calls pytest like this:
```
commands =
    python -Wdefault -b -m pytest tests/unit \
        --in_memory=False \
        --basetemp="tests/tmp/{envname}" \
        --junitxml="tests/tmp/results/junit-out_memory-{envname}.xml" \
        --junit-prefix="foo-{envname}"
```
As you can see, the paths are wrapped in ticks.
When running the tox env via tox<4.0.0, the ticks apparently get stripped before the arguments are passed to the pytest subprocess, so there is NO problem.
Here is a print of the arguments inside `.tox/py39/Lib/site-packages/_pytest/config/__init__.py:306`:
`ARGS: ['tests/unit', '--in_memory=False', '--basetemp=tests/tmp/py39', '--junitxml=tests/tmp/results/junit-out_memory-py39.xml', '--junit-prefix=DLH5-py-py39']`
In tox 4.0.2, however, the ticks are not stripped anymore, which results in the following arguments being passed to pytest:
`ARGS: ['tests/unit', '--in_memory=False', '--basetemp="tests/tmp/py39"', '--junitxml="tests/tmp/results/junit-out_memory-py39.xml"', '--junit-prefix="DLH5-py-py39"']`
Then pytest fails with the following stderr:
```
py39: install_deps> python -I -m pip install junitparser pytest
.pkg: _optional_hooks> python C:\Python39\lib\site-packages\pyproject_api\_backend.py True setuptools.build_meta
.pkg: get_requires_for_build_editable> python C:\Python39\lib\site-packages\pyproject_api\_backend.py True setuptools.build_meta
.pkg: get_requires_for_build_sdist> python C:\Python39\lib\site-packages\pyproject_api\_backend.py True setuptools.build_meta
.pkg: build_wheel> python C:\Python39\lib\site-packages\pyproject_api\_backend.py True setuptools.build_meta
.pkg: build_sdist> python C:\Python39\lib\site-packages\pyproject_api\_backend.py True setuptools.build_meta
py39: install_package_deps> python -I -m pip install h5py hdf5plugin==2.3.2 numpy pandas psutil scipy wavewatson==1.1.*
py39: install_package> python -I -m pip install --force-reinstall --no-deps C:\git\dlh5-py\.tox\.tmp\package\29\dlh5-py-1.9.0.tar.gz
py39: commands_pre[0]> python -c "import sys; print('Used Python: Python %s on %s @ %s' % (sys.version, sys.platform, sys.executable))"
Used Python: Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32 @ C:\git\dlh5-py\.tox\py39\Scripts\python.EXE
py39: commands_pre[1]> python -c "from pathlib import Path;Path('tests/tmp/py39').mkdir(parents=True, exist_ok=True)"
py39: commands[0]> python -Wdefault -b -m pytest tests/unit --in_memory=False --basetemp=\"tests/tmp/py39\" --junitxml=\"tests/tmp/results/junit-out_memory-py39.xml\" --junit-prefix=\"DLH5-py-py39\"
ARGS: ['tests/unit', '--in_memory=False', '--basetemp="tests/tmp/py39"', '--junitxml="tests/tmp/results/junit-out_memory-py39.xml"', '--junit-prefix="DLH5-py-py39"']
Traceback (most recent call last):
File "C:\Python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Python39\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\pytest\__main__.py", line 5, in <module>
raise SystemExit(pytest.console_main())
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\_pytest\config\__init__.py", line 190, in console_main
code = main()
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\_pytest\config\__init__.py", line 148, in main
config = _prepareconfig(args, plugins)
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\_pytest\config\__init__.py", line 329, in _prepareconfig
config = pluginmanager.hook.pytest_cmdline_parse(
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\pluggy\_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\pluggy\_callers.py", line 55, in _multicall
gen.send(outcome)
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\_pytest\helpconfig.py", line 103, in pytest_cmdline_parse
config: Config = outcome.get_result()
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\pluggy\_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\pluggy\_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\_pytest\config\__init__.py", line 1058, in pytest_cmdline_parse
self.parse(args)
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\_pytest\config\__init__.py", line 1346, in parse
self._preparse(args, addopts=addopts)
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\_pytest\config\__init__.py", line 1214, in _preparse
self._initini(args)
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\_pytest\config\__init__.py", line 1130, in _initini
ns, unknown_args = self._parser.parse_known_and_unknown_args(
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\_pytest\config\argparsing.py", line 173, in parse_known_and_unknown_args
return optparser.parse_known_args(strargs, namespace=namespace)
File "C:\Python39\lib\argparse.py", line 1858, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "C:\Python39\lib\argparse.py", line 2067, in _parse_known_args
start_index = consume_optional(start_index)
File "C:\Python39\lib\argparse.py", line 2007, in consume_optional
take_action(action, args, option_string)
File "C:\Python39\lib\argparse.py", line 1919, in take_action
argument_values = self._get_values(action, argument_strings)
File "C:\Python39\lib\argparse.py", line 2450, in _get_values
value = self._get_value(action, arg_string)
File "C:\Python39\lib\argparse.py", line 2483, in _get_value
result = type_func(arg_string)
File "C:\git\dlh5-py\.tox\py39\lib\site-packages\_pytest\main.py", line 251, in validate_basetemp
if is_ancestor(Path.cwd().resolve(), Path(path).resolve()):
File "C:\Python39\lib\pathlib.py", line 1215, in resolve
s = self._flavour.resolve(self, strict=strict)
File "C:\Python39\lib\pathlib.py", line 215, in resolve
s = self._ext_to_normal(_getfinalpathname(s))
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: '"tests\\tmp\\py39"'
```
The pytest version in both cases is the same 7.2.0.
Since tox 4 is claimed to be backwards compatible I would assume this is a bug in tox.
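For context, POSIX-style word splitting removes the surrounding quotes, which appears to match what tox < 4 handed to the subprocess (a quick stdlib check):

```python
import shlex

# POSIX-style word splitting removes the surrounding double quotes, which
# matches the arguments tox < 4 apparently passed to pytest:
arg = '--basetemp="tests/tmp/py39"'
print(shlex.split(arg))  # ['--basetemp=tests/tmp/py39']
```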
## Environment
- OS: Win10 64-bit
| 0easy
|
Title: Add transforms from Augraphy
Body: For every transform:
1. Double check that there is no similar transform already in the library.
2. Create a separate pull request.
3. Add the code and check, where possible, that numpy operations are replaced by equivalent OpenCV code.
4. Use vectorized operations where possible.
5. Add a link to the original implementation and, where possible, the paper where the transform was introduced.
https://augraphy.readthedocs.io/en/latest/doc/source/list_of_augmentations.html
- [ ] BadPhotoCopy
- [ ] BindingsAndFasteners
- [ ] Bleedthrough
- [ ] BrightnessTexturize
- [ ] ColorShift
- [ ] DelaunayTessellation
- [ ] DepthSimulatedBlur
- [ ] DirtyDrum
- [ ] DirtyRollers
- [ ] DirtyScreen
- [ ] DotMatrix
- [ ] DoubleExposure
- [ ] Faxify
- [ ] Hollow
- [ ] InkBleed
- [ ] InkColorSwap
- [ ] InkMottling
- [ ] LCDScreenPattern
- [ ] LensFlare
- [ ] Letterpress
- [ ] LightingGradient
- [ ] LinesDegradation
- [ ] LowInkPeriodicLines
- [ ] LowInkRandomLines
- [ ] LowLightNoise
- [ ] Markup
- [ ] Moire
- [ ] NoiseTexturize
- [ ] NoisyLines
- [ ] PatternGenerator
- [ ] ReflectedLight
- [ ] Scribbles
- [ ] ShadowCast
- [ ] SubtleNoise
- [ ] VoronoiTessellation
- [ ] WaterMark
- [ ] BookBinding
- [ ] Folding
- [ ] GlitchEffect
- [ ] InkShifter
- [ ] Rescale as Rescale2DPI
- [ ] SectionShift
| 0easy
|
Title: MongoDB Connector for PandasAI
Body: ### π The feature
Integrate a MongoDB connector into PandasAI to allow users to easily access and manipulate data stored in MongoDB databases. This connector will leverage PyMongoArrow to efficiently convert MongoDB query results into Apache Arrow, Numpy, and Pandas formats, improving performance during data manipulation and analysis.
### Motivation, pitch
To simplify the process of accessing and manipulating data stored in MongoDB for PandasAI users. Additionally, by utilizing PyMongoArrow, users will benefit from efficient data conversion between MongoDB and the common data formats used in data analysis, such as Apache Arrow, Numpy, and Pandas.
### Alternatives
### Additional context
| 0easy
|
Title: [Feature request] Add apply_to_images to ConstrainedCoarseDropout
Body: | 0easy
|
Title: Mention other, preferred, packages in docs
Body: [`QuantileRegressor`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.QuantileRegressor.html#sklearn.linear_model.QuantileRegressor) made it into scikit-learn some time ago already.
As other features could lose the _experimental status_ and be incorporated in the future as well (I am looking at you, [`TunedThresholdClassifier`](https://github.com/scikit-learn/scikit-learn/pull/26120)), should we plan to deprecate them in future versions of scikit-lego? | 0easy
|
Title: Feature Request: Add JSON logging to Robusta Runner
Body: We use Kibana for logs and use structured JSON logs where possible.
It would be great if the `runner` could be configured to emit JSON logs.
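As a sketch of what this could look like (field names are illustrative, not an existing Robusta option), a minimal stdlib JSON formatter:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Minimal structured-log formatter; the field names are illustrative."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("runner")
logger.addHandler(handler)
logger.warning("hello")  # emits one JSON object per log line
```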
| 0easy
|
Title: [Feature] Using the session object after the async_session has been closed should raise an error
Body: ```
async with curl_cffi.requests.AsyncSession() as c:
    r1 = await c.get('https://www.baidu.com')
    print(r1)
    r2 = await c.get('https://www.baidu.com')
```
When making the r2 request, the async_session has already been closed, so a "session already closed" exception should be raised.
| 0easy
|
Title: Fix capitalisation of `Jupyter notebook` to `Jupyter Notebook`
Body: - Do a quick search and find for any `Jupyter notebook` version and replace with `Jupyter Notebook` where suitable | 0easy
|
Title: Offer order export from offer detail view only includes page 1 in export.
Body: If you go to the offer detail view in the dashboard, and this offer has been used in orders, you can generate an export.
See:
https://github.com/django-oscar/django-oscar/blob/1d1590f2a69ee75723482afb135b24c5c3523838/src/oscar/apps/dashboard/offers/views.py#L407-L412
However, if you have multiple pages of orders, only the first page will be included in the export.
It would make more sense for a generated export to include all the orders.
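The general shape of the fix is to export the full result set instead of the current page; a framework-agnostic sketch of the difference (toy pagination, not Oscar's actual code):

```python
def paginate(items, per_page):
    """Toy pagination: yield successive pages of `items`."""
    for start in range(0, len(items), per_page):
        yield items[start:start + per_page]


orders = [f"order-{i}" for i in range(25)]

# Buggy pattern: export only what the paginated view currently shows.
first_page_export = next(paginate(orders, 10))   # 10 rows

# Fixed pattern: export the full result set, ignoring pagination.
full_export = orders                             # 25 rows
```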
| 0easy
|
Title: [NEED HELP] UNABLE TO RUN THE CODE ON MY PC
Body: ### Issue summary
Effectively Running the Code issue
### Detailed description
Hello I am not tech savvy. I'm attempting to put the software/code in place so I can use it on a job application website (monster, fintalent, indeed etc.) Is there anyone I can hire/pay to run it for me for the uses i need?
### Steps to reproduce (if applicable)
_No response_
### Expected outcome
I expect to find someone I can pay to run the program for my desired uses.
### Additional context
Using this project as a stepping stone to finding a developer for a more intense, profitable project with a high "for hire" price. | 0easy
|
Title: Add DNS endpoint support for GKE Hooks and Operators
Body: ### Description
The current GKE Hook only supports IP-based endpoints for accessing the control plane of GKE; it should now also support access using a DNS endpoint.
### Use case/motivation
A Cloud Composer environment that is in a separate VPC network from the GKE cluster in question cannot be accessed via internal IP endpoints.
Enabling external IPs is not desired, and setting up a bastion host is no longer required since access via DNS is now possible.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| 0easy
|
Title: Groups REST API - Add DELETE /api/groups/(int: id)/roles/(int: role_id)
Body: ### Describe the problem
https://docs.weblate.org/en/weblate-5.10.2/api.html#groups
`components`, `projects`, `languages`, `componentlists`, and `admins` all have their corresponding POST and DELETE methods.
`roles` only has a POST method.
### Describe the solution you would like
Add `DELETE /api/groups/(int: id)/roles/(int: role_id)`
### Describe alternatives you have considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_ | 0easy
|
Title: Bill Williams Fractals with Alligator Indicators
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
0.2.23b0
**Upgrade.**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
0.2.28b0
**Is your feature request related to a problem? Please describe.**
No
**Describe the solution you'd like**
Implementation of Bill Williams Alligator and Fractals indicators
**Describe alternatives you've considered**
See below links
https://pandastechindicators.readthedocs.io/en/latest/#alligator
https://pandastechindicators.readthedocs.io/en/latest/#fractals
https://www.tradingview.com/script/lLgCdjag-Bill-Williams-Divergent-Bars/
**Additional context**
Some good code already available.
https://github.com/Tolkoton/WilliamsIndicators/blob/master/WilliamsIndicators.py
https://github.com/dmitriiweb/tapy
Also if possible implement crossover like in this => https://tulipindicators.org/list
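For reference, the fractal rule itself is simple to state; a plain-Python sketch (window of 2 per Bill Williams' definition, no pandas dependency):

```python
def williams_fractals(highs, lows, window=2):
    """Return (fractal_high_indices, fractal_low_indices).

    A bar is a fractal high when its high exceeds the highs of `window` bars
    on each side (Bill Williams uses window=2); fractal lows are symmetric.
    """
    ups, downs = [], []
    for i in range(window, len(highs) - window):
        around = list(range(i - window, i)) + list(range(i + 1, i + window + 1))
        if all(highs[i] > highs[j] for j in around):
            ups.append(i)
        if all(lows[i] < lows[j] for j in around):
            downs.append(i)
    return ups, downs
```

A pandas-ta implementation would vectorize this with rolling max/min windows, but the loop shows the rule.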
| 0easy
|
Title: [Bug] Bark Model can not generate special sounds
Body: ### Describe the bug
I have encountered an issue where the output I receive differs from the output shown in the official repository, even though I am using the same prompt that is provided. This discrepancy is creating confusion and making it difficult for me to understand and compare my results with the expected outcomes.
my code
```
from TTS.api import TTS
import time

# Load the model to GPU
# Bark is really slow on CPU, so we recommend using GPU.
tts = TTS("tts_models/multilingual/multi-dataset/bark", gpu=True)

text = """
♪ In the jungle, the mighty jungle, the lion barks tonight ♪
"""

tts.tts_to_file(text=text,
                file_path="output.wav",
                voice_dir="bark-voices/",
                speaker="en_speaker_9")
```
bark example code
```
from bark import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav
from IPython.display import Audio
# download and load all models
preload_models()
# generate audio from text
text_prompt = """
♪ In the jungle, the mighty jungle, the lion barks tonight ♪
"""
audio_array = generate_audio(text_prompt)
# save audio to disk
write_wav("bark_generation.wav", SAMPLE_RATE, audio_array)
# play text in notebook
Audio(audio_array, rate=SAMPLE_RATE)
```
TTS cannot generate the song.
### To Reproduce
python code
```
from TTS.api import TTS
import time

# Load the model to GPU
# Bark is really slow on CPU, so we recommend using GPU.
tts = TTS("tts_models/multilingual/multi-dataset/bark", gpu=True)

text = """
♪ In the jungle, the mighty jungle, the lion barks tonight ♪
"""

tts.tts_to_file(text=text,
                file_path="output.wav",
                voice_dir="bark-voices/",
                speaker="en_speaker_9")
```
### Expected behavior
generate the song
### Logs
```shell
no error outputs
```
### Environment
```shell
{
    "CUDA": {
        "GPU": [
            "NVIDIA A10"
        ],
        "available": true,
        "version": "11.7"
    },
    "Packages": {
        "PyTorch_debug": false,
        "PyTorch_version": "2.0.1+cu117",
        "TTS": "0.17.8",
        "numpy": "1.22.0"
    },
    "System": {
        "OS": "Linux",
        "architecture": [
            "64bit",
            "ELF"
        ],
        "processor": "x86_64",
        "python": "3.10.13",
        "version": "#74~20.04.1-Ubuntu SMP Wed Feb 22 14:52:34 UTC 2023"
    }
}
```
### Additional context
_No response_ | 0easy
|
Title: Add celery
Body: | 0easy
|
Title: Better error message when missing the IRKernel
Body: When running an R pipeline, an error is raised if the IRKernel is not installed. It'd be better to improve the error message to state that the kernel is missing and include a link to the installation instructions:
https://docs.ploomber.io/en/latest/user-guide/r-support.html | 0easy
|
Title: [Feature request] Add apply_to_images to ISONoise
Body: | 0easy
|
Title: Enhanced object creation, using column references
Body: When instantiating a Table object, we do this (like in 99% of ORMs):
```python
Band(name="Pythonistas", popularity=1000)
```
Some Piccolo queries allow you to pass in a dictionary mapping column references to values, instead of using kwargs. It's nice for tab completion, and also for catching errors.
```python
Band.update({
Band.popularity: 2000
}).run_sync()
```
It would be good to have this ability for instantiating objects too. Something like:
```python
Band(_data={Band.name: "Pythonistas", Band.popularity: 2000})
``` | 0easy
|
Title: [Feature request] Add apply_to_images to BboxSafeRandomCrop
Body: | 0easy
|
Title: Custom labels for get_topic_tree
Body: ### Discussed in https://github.com/MaartenGr/BERTopic/discussions/2122
<div type='discussions-op-text'>
<sup>Originally posted by **clfkenny** August 17, 2024</sup>
Hi there, I noticed the `visualize_hierarchy` method has a `custom_labels` parameter. Is there anything similar to this for the `get_topic_tree` method? I use the LLM generated labels as the custom labels so the hierarchy tree is much easier to interpret. Thanks! </div>
This seems like a good first issue for someone wanting to contribute to this repository. It has a very similar structure to the `visualize_hierarchy` functionality, so I believe it should be straightforward to implement. Unfortunately, I have a large backlog and cannot prioritize this, so if anyone wants to help out then that would be great! | 0easy
|
Title: 'Last run X minutes ago' compares UTC to naive timestamp
Body: This means they're incorrect unless your local timezone is equal to UTC.
In general, any newly generated report will have a 'last run' time of
`babel.dates.format_timedelta(datetime.datetime.utcnow() - datetime.datetime.now())`
and will just show your difference from UTC. | 0easy
|
Title: Implement Graceful Cache Failure Handling and Logging in Initialization
Body: ## Issue Summary
Currently, when the cache fails, an "AssertionError: You must call init first!" error is thrown, which is not user-friendly. Instead, the system should provide a warning message indicating that the results are not cached. Additionally, during the initialization function, it should log whether the connection to Redis was successful or not.
## Steps to Reproduce the Issue
1. Attempt to access cache results without initializing the cache.
2. Observe the resulting error message.
### Observed Behavior
An "AssertionError: You must call init first!" is thrown, disrupting the user experience.
## Expected Behavior
1. If the cache fails, a warning message should be displayed, informing the user that the results are not cached.
2. During the cache initialization, the system should log whether the connection to Redis was established successfully.
## Suggested Enhancements
### Graceful Failure Handling
- Modify the cache access logic to check if the cache has been initialized.
- If not initialized, return a warning message instead of throwing an error.
- Example warning message: "Cache is not initialized. Results are not cached."
### Logging Connection Status
- Enhance the init function to log the status of the connection to Redis.
- Example log messages:
- "Successfully connected to Redis."
- "Failed to connect to Redis."
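A minimal sketch of the suggested behavior (names like `init`/`cache_get` are illustrative, and a dict stands in for the Redis client):

```python
import warnings

_cache = None  # set by init(); None means "not initialized"


def init(url: str) -> None:
    """Connect to the backend; log success or failure instead of raising."""
    global _cache
    # A real implementation would connect to Redis here and log the outcome;
    # a plain dict stands in for the client in this sketch.
    _cache = {}


def cache_get(key):
    if _cache is None:
        warnings.warn("Cache is not initialized. Results are not cached.")
        return None
    return _cache.get(key)
```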
| 0easy
|
Title: replace load_blueprints with a better version
Body: ``` python
# coding: utf-8
import os
import random
import logging

from werkzeug.utils import import_string

logger = logging.getLogger()


def load_blueprints_from_folder(app, folder_path=None, object_name=None):
    folder_path = (
        folder_path or
        os.path.join(app.root_path, 'blueprints')
    )
    object_name = object_name or app.config.get(
        'BLUEPRINTS_OBJECT_NAME', 'blueprint'
    )
    dir_list = os.listdir(folder_path)
    base_module_name = app.name.replace(".app", "")
    loaded_blueprints = {}

    for fname in dir_list:
        is_valid_module = all([
            not os.path.exists(os.path.join(folder_path, fname, 'DISABLED')),
            os.path.isdir(os.path.join(folder_path, fname)),
            os.path.exists(os.path.join(folder_path, fname, '__init__.py'))
        ])
        if is_valid_module:
            import_name = ".".join([base_module_name, fname, object_name])
            try:
                blueprint = import_string(import_name)
            except ImportError:
                app.logger.error("Cannot load %s blueprint", fname)
            else:
                loaded_blueprints[fname] = blueprint
                if blueprint.name not in app.blueprints:
                    app.register_blueprint(blueprint)
                else:
                    blueprint.name += str(random.getrandbits(8))
                    app.register_blueprint(blueprint)

    logger.info("{0} modules loaded".format(loaded_blueprints.keys()))
```
| 0easy
|
Title: Selecting a use case selects a non-.py file
Body: 
| 0easy
|
Title: Add generation time from output.xml to `Result` object
Body: The `output.xml` has `<robot generated="year-month-dayTHH:MM:SS.milliseconds" …>` in the xml. It would be handy to expose the `generated` attribute from the RF result package. We need the `generated` attribute when reading test results from multiple runs in CI and plotting the results in a time-series manner. | 0easy
|
Title: [Bug]: ScalarFormatter can't be forced to use an offset of 1
Body: ### Bug summary
The ScalarFormatter class takes an argument useOffset, either True, which will determine an automatic offset, False, which disables an offset or an numeric value, which is the used as an offset.
Since in Python `1 == True` evaluates to `True`, trying to pass a numeric offset value of 1 just enables the automatic offset.
https://matplotlib.org/stable/api/ticker_api.html#matplotlib.ticker.ScalarFormatter
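The root cause is easy to demonstrate (a sketch of the distinction, not matplotlib's actual code):

```python
val = 1
print(val == True)            # True  -> an `if useOffset == True:` check also catches 1
print(val is True)            # False
print(isinstance(val, bool))  # False -> robust way to detect an actual boolean flag


def set_use_offset(value):
    """Sketch of a disambiguating check (illustrative only)."""
    if isinstance(value, bool):
        return "automatic" if value else "disabled"
    return f"fixed offset {value}"
```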
### Code for reproduction
```Python
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import ScalarFormatter
# Sample data
x = np.linspace(0, 10, 100)
y = np.linspace(0.999, 1.001, 100)
# Create the plot
fig, ax = plt.subplots()
ax.plot(x, y)
formatter = ScalarFormatter()
formatter.set_useOffset(1)
# identical to formatter = ScalarFormatter(useOffset=1)
ax.yaxis.set_major_formatter(formatter)
plt.show()
```
### Actual outcome

### Expected outcome
The y-axis should be centered around zero; the expected offset of 1 is not used.
### Additional information
This happens only for useOffset = 1.
If the y values are changed (e.g. y = np.linspace(0.99999, 1.00001, 100)), the automatic offset actually picks an offset of 1, so the bug is not obvious then.
### Operating system
Win 11
### Matplotlib Version
3.7.2
### Matplotlib Backend
module://matplotlib_inline.backend_inline
### Python version
_No response_
### Jupyter version
6.5.4
### Installation
conda | 0easy
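The root cause is that Python's `bool` is a subclass of `int`, so equality- or membership-based checks cannot tell the offset 1 apart from `True`. A quick illustration:

```python
# Why an offset of 1 is indistinguishable from True under equality checks:
print(1 == True)           # True
print(1 in [True, False])  # True

# A robust check has to look at the exact type instead:
val = 1
print(type(val) is bool)   # False -> val is a genuine numeric offset
flag = True
print(type(flag) is bool)  # True  -> flag toggles the automatic offset
```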
|
Title: AttributeError: 'Merge' object has no attribute 'fit_transform'
Body: Hi everyone,
I got a problem when running the example from autokeras website. It returns "AttributeError: 'Merge' object has no attribute 'fit_transform'". It reports the same error when I do not use Merge(), say only use ConvBlock (output_node1). Any help would be appreciated.
Here is the code:

```python
import autokeras as ak
from tensorflow.keras.datasets import mnist

input_node = ak.ImageInput()
output_node = ak.Normalization()(input_node)
output_node = ak.ImageAugmentation()(output_node)
output_node1 = ak.ConvBlock()(output_node)
output_node2 = ak.ResNetBlock(version='v2')(output_node)
output_node = ak.Merge()([output_node1, output_node2])

auto_model = ak.AutoModel(
    inputs=input_node,
    outputs=output_node,
    max_trials=10)

(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)  # (60000, 28, 28)
print(y_train.shape)  # (60000,)
print(y_train[:3])    # array([7, 2, 1], dtype=uint8)

# Feed the AutoModel with training data.
auto_model.fit(x_train, y_train)
# Predict with the best model.
predicted_y = auto_model.predict(x_test)
# Evaluate the best model with testing data.
print(auto_model.evaluate(x_test, y_test))
```
Version information (I am using a GPU):
- python --version: Python 3.6.9
- tf.__version__: '2.1.0'
- tf.keras.__version__: '2.2.4-tf'
- autokeras 1.0.0
Here is the full error message:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-9-0fc492058b0a> in <module>
     21
     22 # Feed the AutoModel with training data.
---> 23 auto_model.fit(x_train, y_train)
     24 # Predict with the best model.
     25 predicted_y = auto_model.predict(x_test)

~/software/miniconda3/envs/gpu/lib/python3.6/site-packages/autokeras/auto_model.py in fit(self, x, y, epochs, callbacks, validation_split, validation_data, **kwargs)
    185             y=y,
    186             validation_data=validation_data,
--> 187             validation_split=validation_split)
    188
    189         # Initialize the hyper_graph.

~/software/miniconda3/envs/gpu/lib/python3.6/site-packages/autokeras/auto_model.py in _prepare_data(self, x, y, validation_data, validation_split)
    260         # TODO: Handle other types of input, zip dataset, tensor, dict.
    261         # Prepare the dataset.
--> 262         dataset = self._process_xy(x, y, fit=True)
    263         if validation_data:
    264             self._split_dataset = False

~/software/miniconda3/envs/gpu/lib/python3.6/site-packages/autokeras/auto_model.py in _process_xy(self, x, y, fit, predict)
    244         for data, head_block in zip(y, self.heads):
    245             if fit:
--> 246                 data = head_block.fit_transform(data)
    247             else:
    248                 data = head_block.transform(data)

AttributeError: 'Merge' object has no attribute 'fit_transform'
```
| 0easy
|
Title: Marketplace - Change "build AI agents and share your vision" font to Poppins font
Body: ### Describe your issue.
Can we change this to the Poppins font?
Font: poppins
weight: semibold
size: 48px
line-height: 54px
The "your" should be the same color as the other sentence, not purple.

| 0easy
|
Title: Make `secure_filename()` escape reserved Windows filenames
Body: @CaselIT noticed that we didn't hold Windows hand regarding device file names, like
https://github.com/pallets/werkzeug/blob/7868bef5d978093a8baa0784464ebe5d775ae92a/src/werkzeug/utils.py#L229-L237
(_Originally posted by @CaselIT in https://github.com/falconry/falcon/pull/2421#discussion_r1887617832_.)
| 0easy
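A minimal sketch of the kind of check being discussed. The helper name and regex placement are assumptions for illustration, not Werkzeug's or Falcon's actual code:

```python
import re

# Windows reserves device names such as CON, PRN, AUX, NUL, COM1-9 and
# LPT1-9, with or without an extension (e.g. "CON.txt" is also reserved).
_WINDOWS_RESERVED = re.compile(
    r"^(con|prn|aux|nul|com[1-9]|lpt[1-9])(\..*)?$", re.IGNORECASE
)

def escape_reserved(filename: str) -> str:
    # Prefix reserved names with an underscore so they are safe to create.
    if _WINDOWS_RESERVED.match(filename):
        return "_" + filename
    return filename

print(escape_reserved("CON.txt"))     # _CON.txt
print(escape_reserved("report.txt"))  # report.txt
```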
|
Title: Duplicated pokemon-species in evolution-trigger #1
Body: Dear PokeAPI team,
**https://pokeapi.co/api/v2/evolution-trigger/1/** contains Magnezone 4 times, at indices 192, 262, 294 and 298.
Some others, like Leafeon and Glaceon, also appear more than once.
Is it possible to reduce the linked species so that every species appears only once?
| 0easy
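Until the duplicates are fixed server-side, clients can deduplicate the list themselves. A sketch, assuming the usual `{"name": ..., "url": ...}` result shape:

```python
def unique_species(species_list):
    # Keep the first occurrence of each species, preserving order.
    seen = set()
    out = []
    for s in species_list:
        if s["name"] not in seen:
            seen.add(s["name"])
            out.append(s)
    return out

dupes = [{"name": "magnezone"}, {"name": "leafeon"}, {"name": "magnezone"}]
print(unique_species(dupes))  # [{'name': 'magnezone'}, {'name': 'leafeon'}]
```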
|
Title: [FEA] Quickly compute multiple algorithms
Body: **Is your feature request related to a problem? Please describe.**
I'll often write:
```python
g = g.compute_cugraph('weakly_connected_components')
g = g.compute_cugraph('pagerank')
g = g.compute_cugraph('betweenness_centrality')
```
We ought to have short-hands!
**Describe the solution you'd like**
Some ideas:
* `g.compute_cugraph(['pagerank', 'betweenness_centrality'])`
*
```python
g.compute_cugraph([
('pagerank', opts123),
'betweenness_centrality'
])
```
* `g.compute_cugraph(g.cugraph.node_labels)`
**Additional context**
Same likely applies to `compute_igraph()`
| 0easy
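A hypothetical wrapper (not part of the PyGraphistry API) showing how the proposed list form could dispatch, accepting plain names or `(name, opts)` pairs:

```python
def compute_many(g, algs):
    # Apply each algorithm in sequence; a tuple carries keyword options.
    for alg in algs:
        if isinstance(alg, tuple):
            name, opts = alg
            g = g.compute_cugraph(name, **opts)
        else:
            g = g.compute_cugraph(alg)
    return g

# Demonstration with a stand-in object that just records the calls:
class FakeGraph:
    def __init__(self, calls=()):
        self.calls = list(calls)
    def compute_cugraph(self, name, **opts):
        return FakeGraph(self.calls + [(name, opts)])

g = compute_many(FakeGraph(), ["pagerank", ("betweenness_centrality", {"k": 10})])
print(g.calls)  # [('pagerank', {}), ('betweenness_centrality', {'k': 10})]
```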
|
Title: [BUG][DOC] Translation wrong in cluster deployment
Body: <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Translation wrong in cluster deployment.
English version works well.

Chinese version is wrong.

Related file is:
https://github.com/mars-project/mars/blob/25d742793d7c1f5bde5190adbc8f0b7c833e59d3/docs/source/locale/zh_CN/LC_MESSAGES/installation/deploy.po#L219-L224 | 0easy
|
Title: OpenAI should automatically switch to the model with more context
Body: ### π The feature
Currently, when users encounter errors due to a context window that exceeds the model's capacity, they need to manually adjust their model choice to a larger version, which can be cumbersome and may disrupt the workflow. This feature would automate the process, ensuring that users get the best possible performance without the need for manual intervention.
## Proposed Solution
The proposed solution is to implement an automatic context model switching mechanism that checks the context window size and switches to a larger model if needed. Here's how it could work:
1. When a user sends a request with a specific model (e.g., "gpt-4"), the system checks the context window size.
2. If the context window size exceeds the capacity of the selected model (e.g., "gpt-4"), the system automatically switches to the corresponding larger model (e.g., "gpt-4-32k").
3. The system processes the request using the larger model to ensure the context fits within its capacity.
4. The user receives the response seamlessly, without having to manually change the model selection.
## Implementation Considerations
To implement this feature, OpenAI may need to develop a mechanism for context size detection and automatic model switching within the API infrastructure.
## Example Scenario
A user sends a request to the API using "gpt-4" but accidentally provides a very long context that exceeds the model's capacity. Instead of receiving an error, the system automatically switches to "gpt-4-32k" to accommodate the larger context, and the user receives a timely and accurate response.
### Motivation, pitch
1. Improved user experience: Users won't have to manually switch models when encountering context-related errors, making the interaction with OpenAI models more seamless.
2. Error prevention: Automatic context model switching can help prevent errors caused by users inadvertently exceeding the context window of their selected model.
3. Efficient use of resources: By automatically selecting the appropriate model based on context size, the system can make efficient use of computational resources.
### Alternatives
_No response_
### Additional context
This feature could be particularly useful for applications that involve dynamic and variable-length input contexts, such as chatbots, language translation, and content generation. | 0easy
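The four steps above can be sketched client-side. The model names, context limits, and upgrade map here are illustrative assumptions, not provider-confirmed values:

```python
# Illustrative limits only; real values depend on the provider's models.
CONTEXT_LIMITS = {"gpt-4": 8192, "gpt-4-32k": 32768}
UPGRADES = {"gpt-4": "gpt-4-32k"}

def pick_model(requested, token_count):
    # Walk the upgrade chain until the context fits, or fail loudly.
    model = requested
    while token_count > CONTEXT_LIMITS[model]:
        if model not in UPGRADES:
            raise ValueError(f"context of {token_count} tokens fits no available model")
        model = UPGRADES[model]
    return model

print(pick_model("gpt-4", 4000))   # gpt-4
print(pick_model("gpt-4", 12000))  # gpt-4-32k
```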
|
Title: Enhancement: Support all kwargs for `model_dump` in PydanticPlugin
Body: ### Summary
`litestar.contrib.pydantic.PydanticPlugin` has the argument `prefer_alias` which sets the `by_alias` argument of the created Pydantic type encoders. The type encoders are based on `pydantic.BaseModel.model_dump` which also has the arguments `exclude_none` and `exclude_defaults`. This enhancement suggests to add similar arguments to `PydanticPlugin` to configure the type encoders.
### Basic Example
```python
import pydantic
import typing
import litestar
import litestar.contrib.pydantic
class MyModel(pydantic.BaseModel):
some_number: int = pydantic.Field(alias="someNumber")
title: str = "a number"
comment: typing.Optional[str] = None
@litestar.get("/")
def get_stuff() -> MyModel:
return MyModel(some_number=42)
app = litestar.Litestar(
route_handlers=[get_stuff],
plugins=[
litestar.contrib.pydantic.PydanticPlugin(
prefer_alias=True,
# NOTE: The argument names are just provisional
exclude_none=True, # new
exclude_defaults=True, # new
)
],
)
```
A GET request to `/` would return `{"someNumber":42}` because of the plugin configuration. Without it, it would return `{"some_number":42,"title":"a number","comment":null}`.
### Drawbacks and Impact
You can currently achieve the same result without this feature by adding a custom type encoder for `pydantic.BaseModel`. However, it must be combined with `PydanticPlugin` for the generated schema to use aliases.
```python
import pydantic
import typing
import litestar
import litestar.contrib.pydantic
class MyModel(pydantic.BaseModel):
some_number: int = pydantic.Field(alias="someNumber")
title: str = "a number"
comment: typing.Optional[str] = None
@litestar.get("/")
def get_stuff() -> MyModel:
return MyModel(some_number=42)
app = litestar.Litestar(
route_handlers=[get_stuff],
plugins=[litestar.contrib.pydantic.PydanticPlugin(prefer_alias=True)],
type_encoders={
pydantic.BaseModel: lambda x: x.model_dump(mode="json", by_alias=True, exclude_none=True, exclude_defaults=True),
},
)
```
Therefore, the only purpose of this enhancement is to make this use case (which I believe to be a common one) simpler to express in code by making the plugin able to do more, reducing the need for type encoders. The community must decide if this is desirable or not.
As long as the default values of the new arguments match the old behavior, this should not cause any issues with backwards compatibility.
### Unresolved questions
I can see that there may be concerns about having to support a growing list of `pydantic.BaseModel.model_dump` arguments (for example `exclude_unset`). I believe that concern to be valid, but you could always draw a line and say "the plugin supports these extremely common arguments, if you want more, use a custom type encoder". Still, the concern is valid and I believe it should be part of the discussion. | 0easy
|
Title: NumbaDeprecationWarning: numba.generated_jit is deprecated.
Body: ### Current Behaviour
This is a follow-on to https://github.com/ydataai/ydata-profiling/issues/1419 .
Create new environment with just ydata-profiling as described there, see `python -c 'import ydata_profiling'` succeed silently, and then add:
```
$ conda install -c conda-forge numba
...
The following NEW packages will be INSTALLED:
libllvm14 conda-forge/osx-64::libllvm14-14.0.6-hc8e404f_4
llvmlite conda-forge/osx-64::llvmlite-0.40.1-py311hcbb5c6d_0
numba conda-forge/osx-64::numba-0.57.1-py311h5a8220d_0
```
Now make the simplest possible usage of ydata-profiling and see it trigger warnings:
```
$ conda list | grep numb && python -c 'import ydata_profiling'
numba 0.57.1 py311h5a8220d_0 conda-forge
/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/site-packages/numba/core/decorators.py:262: NumbaDeprecationWarning: numba.generated_jit is deprecated. Please see the documentation at: https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-generated-jit for more information and advice on a suitable replacement.
warnings.warn(msg, NumbaDeprecationWarning)
/Users/jhanley/miniconda3/envs/ydata_env/lib/python3.11/site-packages/visions/backends/shared/nan_handling.py:50: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
@nb.jit
```
### Expected Behaviour
A simple import should silently succeed, with no warnings.
### Data Description
[empty set]
### Code that reproduces the bug
```Python
(as above)
```
### pandas-profiling version
4.5.0
### Dependencies
```Text
Same as in #1419, with 3 packages added as shown above.
For the sake of completeness, here is the full list.
$ conda list
# packages in environment at /Users/jhanley/miniconda3/envs/ydata_env:
#
# Name Version Build Channel
attrs 23.1.0 pyh71513ae_1 conda-forge
brotli 1.0.9 hb7f2c08_9 conda-forge
brotli-bin 1.0.9 hb7f2c08_9 conda-forge
brotli-python 1.0.9 py311h814d153_9 conda-forge
bzip2 1.0.8 h0d85af4_4 conda-forge
ca-certificates 2023.7.22 h8857fd0_0 conda-forge
certifi 2023.7.22 pyhd8ed1ab_0 conda-forge
charset-normalizer 3.2.0 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
contourpy 1.1.0 py311h5fe6e05_0 conda-forge
cycler 0.11.0 pyhd8ed1ab_0 conda-forge
dacite 1.8.0 pyhd8ed1ab_0 conda-forge
dataclasses 0.8 pyhc8e2a94_3 conda-forge
fonttools 4.42.0 py311h2725bcf_0 conda-forge
freetype 2.12.1 h3f81eb7_1 conda-forge
htmlmin 0.1.12 py_1 conda-forge
idna 3.4 pyhd8ed1ab_0 conda-forge
imagehash 4.3.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.2 pyhd8ed1ab_1 conda-forge
joblib 1.3.2 pyhd8ed1ab_0 conda-forge
kiwisolver 1.4.4 py311hd2070f0_1 conda-forge
lcms2 2.15 h2dcdeff_1 conda-forge
lerc 4.0.0 hb486fe8_0 conda-forge
libblas 3.9.0 17_osx64_openblas conda-forge
libbrotlicommon 1.0.9 hb7f2c08_9 conda-forge
libbrotlidec 1.0.9 hb7f2c08_9 conda-forge
libbrotlienc 1.0.9 hb7f2c08_9 conda-forge
libcblas 3.9.0 17_osx64_openblas conda-forge
libcxx 16.0.6 hd57cbcb_0 conda-forge
libdeflate 1.18 hac1461d_0 conda-forge
libexpat 2.5.0 hf0c8a7f_1 conda-forge
libffi 3.4.2 h0d85af4_5 conda-forge
libgfortran 5.0.0 12_3_0_h97931a8_1 conda-forge
libgfortran5 12.3.0 hbd3c1fe_1 conda-forge
libjpeg-turbo 2.1.5.1 hb7f2c08_0 conda-forge
liblapack 3.9.0 17_osx64_openblas conda-forge
libllvm14 14.0.6 hc8e404f_4 conda-forge
libopenblas 0.3.23 openmp_h429af6e_0 conda-forge
libpng 1.6.39 ha978bb4_0 conda-forge
libsqlite 3.42.0 h58db7d2_0 conda-forge
libtiff 4.5.1 hf955e92_0 conda-forge
libwebp-base 1.3.1 h0dc2134_0 conda-forge
libxcb 1.15 hb7f2c08_0 conda-forge
libzlib 1.2.13 h8a1eda9_5 conda-forge
llvm-openmp 16.0.6 hff08bdf_0 conda-forge
llvmlite 0.40.1 py311hcbb5c6d_0 conda-forge
markupsafe 2.1.3 py311h2725bcf_0 conda-forge
matplotlib-base 3.7.1 py311h2bf763f_0 conda-forge
multimethod 1.4 py_0 conda-forge
munkres 1.1.4 pyh9f0ad1d_0 conda-forge
ncurses 6.4 hf0c8a7f_0 conda-forge
networkx 3.1 pyhd8ed1ab_0 conda-forge
numba 0.57.1 py311h5a8220d_0 conda-forge
numpy 1.23.5 py311h62c7003_0 conda-forge
openjpeg 2.5.0 h13ac156_2 conda-forge
openssl 3.1.2 h8a1eda9_0 conda-forge
packaging 23.1 pyhd8ed1ab_0 conda-forge
pandas 2.0.3 py311hab14417_1 conda-forge
patsy 0.5.3 pyhd8ed1ab_0 conda-forge
phik 0.12.3 py311h0482ae9_0 conda-forge
pillow 10.0.0 py311h7cb0e2d_0 conda-forge
pip 23.2.1 pyhd8ed1ab_0 conda-forge
platformdirs 3.10.0 pyhd8ed1ab_0 conda-forge
pooch 1.7.0 pyha770c72_3 conda-forge
pthread-stubs 0.4 hc929b4f_1001 conda-forge
pybind11-abi 4 hd8ed1ab_3 conda-forge
pydantic 1.10.12 py311h2725bcf_1 conda-forge
pyparsing 3.1.1 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.11.4 h30d4d87_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge
python_abi 3.11 3_cp311 conda-forge
pytz 2023.3 pyhd8ed1ab_0 conda-forge
pywavelets 1.4.1 py311hd5badaa_0 conda-forge
pyyaml 6.0 py311h5547dcb_5 conda-forge
readline 8.2 h9e318b2_1 conda-forge
requests 2.31.0 pyhd8ed1ab_0 conda-forge
scipy 1.10.1 py311h16c3c4d_3 conda-forge
seaborn 0.12.2 hd8ed1ab_0 conda-forge
seaborn-base 0.12.2 pyhd8ed1ab_0 conda-forge
setuptools 68.0.0 pyhd8ed1ab_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
statsmodels 0.14.0 py311h4a70a88_1 conda-forge
tangled-up-in-unicode 0.2.0 pyhd8ed1ab_0 conda-forge
tk 8.6.12 h5dbffcc_0 conda-forge
tqdm 4.66.0 pyhd8ed1ab_0 conda-forge
typeguard 2.13.3 pyhd8ed1ab_0 conda-forge
typing-extensions 4.7.1 hd8ed1ab_0 conda-forge
typing_extensions 4.7.1 pyha770c72_0 conda-forge
tzdata 2023c h71feb2d_0 conda-forge
urllib3 2.0.4 pyhd8ed1ab_0 conda-forge
visions 0.7.5 pyhd8ed1ab_0 conda-forge
wheel 0.41.1 pyhd8ed1ab_0 conda-forge
wordcloud 1.9.2 py311h2725bcf_1 conda-forge
xorg-libxau 1.0.11 h0dc2134_0 conda-forge
xorg-libxdmcp 1.1.3 h35c211d_0 conda-forge
xz 5.2.6 h775f41a_0 conda-forge
yaml 0.2.5 h0d85af4_2 conda-forge
ydata-profiling 4.5.0 pyhd8ed1ab_0 conda-forge
zstd 1.5.2 h829000d_7 conda-forge
```
### OS
MacOS Monterey 12.6.8
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | 0easy
|
Title: Time to Close metric API
Body: The canonical definition is here: https://chaoss.community/?p=3446 | 0easy
|
Title: New Contributors metric API
Body: The canonical definition is here: https://chaoss.community/?p=3613 | 0easy
|
Title: Queries that Return Zero Rows Raise Exception
Body: e.g. if you run:
```python
from sqlalchemy.engine import create_engine
engine = create_engine("shillelagh://")
connection = engine.connect()
query = """SELECT country, SUM(cnt)
FROM "https://docs.google.com/spreadsheets/d/1_rN3lm0R_bU3NemO0s9pbFkY5LQPcuy1pscv8ZXPtg8/edit#gid=0"
WHERE cnt > 10
GROUP BY country"""
for row in connection.execute(query):
print(row)
```
you get:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/alex/.pyenv/versions/my-app/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 964, in __iter__
return self._iter_impl()
File "/Users/alex/.pyenv/versions/my-app/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 637, in _iter_impl
return self._iterator_getter(self)
File "/Users/alex/.pyenv/versions/my-app/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 1180, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/Users/alex/.pyenv/versions/my-app/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 361, in _iterator_getter
make_row = self._row_getter
File "/Users/alex/.pyenv/versions/my-app/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 1180, in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
File "/Users/alex/.pyenv/versions/my-app/lib/python3.10/site-packages/sqlalchemy/engine/result.py", line 320, in _row_getter
keymap = metadata._keymap
File "/Users/alex/.pyenv/versions/my-app/lib/python3.10/site-packages/sqlalchemy/engine/cursor.py", line 1208, in _keymap
self._we_dont_return_rows()
File "/Users/alex/.pyenv/versions/my-app/lib/python3.10/site-packages/sqlalchemy/engine/cursor.py", line 1189, in _we_dont_return_rows
util.raise_(
File "/Users/alex/.pyenv/versions/my-app/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically
``` | 0easy
|
Title: Optuna Mode doesn't work when tuning LightGBM
Body: Great package, I love that it supports a wide range of functionalities.
However, when I try to create an AutoML instance in Optuna mode for LightGBM, it fails and gives the following error message (it works when I use other ML models, though):
```
[I 2023-12-11 18:14:57,186] A new study created in memory with name: no-name-b35aa835-4503-402c-ad99-b0130c103afb
[W 2023-12-11 18:14:57,553] Trial 0 failed with parameters: {'learning_rate': 0.1, 'num_leaves': 1598, 'lambda_l1': 2.840098794801191e-06, 'lambda_l2': 3.0773599420974e-06, 'feature_fraction': 0.8613105322932351, 'bagging_fraction': 0.970697557159987, 'bagging_freq': 7, 'min_data_in_leaf': 36, 'extra_trees': False} because of the following error: The value None could not be cast to float..
[W 2023-12-11 18:14:57,554] Trial 0 failed with value None.
```
These are the parameter settings for AutoML:
```python
automl = AutoML(
    mode="Optuna",
    eval_metric="f1",
    golden_features=False,
    ml_task='binary_classification',
    kmeans_features=False,
    start_random_models=1,
    stack_models=False,
    train_ensemble=False,
    optuna_time_budget=100,
    optuna_verbose=True,
    features_selection=False,
    algorithms=["LightGBM"],
    validation_strategy={
        "validation_type": "kfold",
        "k_folds": 3,
        "shuffle": True,
        "stratify": True,
    },
)
```
It would be great if you could help me with this. | 0easy
|
Title: better error message when NotebookRunner is initialized with a string instead of a path
Body: `NotebookRunner` expects a string with Python code or a `pathlib.Path` object to a `.py` file. However, new users may do this:
```python
NotebookRunner('script.py')
```
But instead they should do:
```python
from pathlib import Path
NotebookRunner(Path('script.py'))
```
If passed a string, we should try to determine whether the argument "looks like a path to a file" and raise an error accordingly: "perhaps you meant passing a pathlib.Path object".
| 0easy
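A minimal sketch of the suggested heuristic. The function names and the length threshold are assumptions, not Ploomber's actual implementation:

```python
def looks_like_path(source: str) -> bool:
    # Heuristic: a short, newline-free string ending in .py was almost
    # certainly meant to be a file path, not Python source code.
    return "\n" not in source and source.endswith(".py") and len(source) < 256

def check_source(source):
    # Raise the friendlier error the issue proposes.
    if isinstance(source, str) and looks_like_path(source):
        raise ValueError(
            f"{source!r} looks like a path; "
            f"perhaps you meant NotebookRunner(Path({source!r}))"
        )
    return source

print(looks_like_path("script.py"))  # True
print(looks_like_path("x = 1\n"))    # False
```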
|