text (string, lengths 20-57.3k) | labels (class label, 4 classes)
---|---
Title: ATR/RMA differences with TradingView
Body: Let's take the weekly BINANCE:BTCUSDT data from the beginning (14.08.2017) and compute ATR with period=7 and RMA with period=7.
Python results:
```python
import pandas as pd
import numpy as np
import pandas_ta as ta
df = pd.DataFrame({'datetime': {0: '2017-08-14 00:00:00', 1: '2017-08-21 00:00:00', 2: '2017-08-28 00:00:00', 3: '2017-09-04 00:00:00', 4: '2017-09-11 00:00:00', 5: '2017-09-18 00:00:00', 6: '2017-09-25 00:00:00', 7: '2017-10-02 00:00:00', 8: '2017-10-09 00:00:00', 9: '2017-10-16 00:00:00', 10: '2017-10-23 00:00:00', 11: '2017-10-30 00:00:00', 12: '2017-11-06 00:00:00', 13: '2017-11-13 00:00:00'}, 'open': {0: 4261.48, 1: 4069.13, 2: 4310.01, 3: 4505.0, 4: 4153.62, 5: 3690.0, 6: 3660.02, 7: 4400.0, 8: 4640.0, 9: 5710.0, 10: 5975.0, 11: 6133.01, 12: 7345.1, 13: 5839.94}, 'high': {0: 4485.39, 1: 4453.91, 2: 4939.19, 3: 4788.59, 4: 4394.59, 5: 4123.2, 6: 4406.52, 7: 4658.0, 8: 5922.3, 9: 6171.0, 10: 6189.88, 11: 7590.25, 12: 7770.02, 13: 8123.15}, 'low': {0: 3850.0, 1: 3400.0, 2: 4124.54, 3: 3603.0, 4: 2817.0, 5: 3505.55, 6: 3653.69, 7: 4110.0, 8: 4550.0, 9: 5037.95, 10: 5286.98, 11: 6030.0, 12: 5325.01, 13: 5699.99}, 'close': {0: 4086.29, 1: 4310.01, 2: 4509.08, 3: 4130.37, 4: 3699.99, 5: 3660.02, 6: 4378.48, 7: 4640.0, 8: 5709.99, 9: 5950.02, 10: 6169.98, 11: 7345.01, 12: 5811.03, 13: 8038.0}})
atr = df.ta.atr(length=7)
rma = df.ta.rma(length=7)
print(atr)
0 635.390000
1 860.746923
2 842.961496
3 949.315864
4 1116.350102
5 998.287017
6 945.164499
7 865.099081
8 961.674618
9 992.824763
10 977.091686
11 1075.946678
12 1301.999156
13 1483.088680
Name: ATR_7, dtype: float64
print(rma)
0 4086.290000
1 4206.754615
2 4323.399843
3 4263.481982
4 4113.670859
5 4006.272806
6 4086.826964
7 4198.342546
8 4486.173579
9 4752.250467
10 5000.293910
11 5397.762117
12 5465.998719
13 5881.427506
Name: RMA_7, dtype: float64
```
Now compare to TradingView's built-in ATR (it uses RMA).
Here is the Pine Script:
```
//@version=4
study("My Script")
plot(atr(7), title="ATR")
plot(rma(close, 7), title="RMA")
```
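(For reference, TradingView documents `rma()` as Wilder smoothing that returns `na` for the first `length - 1` bars and seeds the next bar with a simple average. A rough pandas sketch of that recursion, as I understand it; the function name is my own:)
```python
import numpy as np
import pandas as pd

def tv_style_rma(series: pd.Series, length: int) -> pd.Series:
    # NaN for the first length-1 bars, an SMA seed, then Wilder smoothing with alpha = 1/length
    alpha = 1.0 / length
    out = pd.Series(np.nan, index=series.index, dtype=float)
    out.iloc[length - 1] = series.iloc[:length].mean()
    for i in range(length, len(series)):
        out.iloc[i] = alpha * series.iloc[i] + (1.0 - alpha) * out.iloc[i - 1]
    return out
```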
**TradingView results are:**
**ATR:**
NaN
NaN
NaN
NaN
NaN
NaN
NaN
948.2300000000000
891.0542857142860
959.8036734693880
984.5531486880470
972.8884131611830
1056.797211281010
1255.1133239551500
**RMA:**
NaN
NaN
NaN
NaN
NaN
NaN
NaN
4110.605714285710
4186.233469387760
4403.912973760930
4624.7854060808000
4845.527490926400
5202.596420794060
5289.515503537760
To be honest I'm a little bit lost and can't find the cause of the issue.
P.S. Sorry for the previously incorrectly opened issue; I clicked "bug" accidentally and the topic read like a discussion with other users asking if anyone had seen this.
| 0easy
|
Title: An error occurred (IllegalLocationConstraintException) during zappa deploy
Body: See https://github.com/Miserlou/Zappa/issues/569
I also experienced this issue when attempting to deploy following the out-of-the-box README commands. Zappa's deploy procedure is obfuscating the cause of the issue, namely that my bucket name is not unique. It would be nice if Zappa would suggest this as a possible issue source when the deploy command is run. | 0easy
|
Title: Add doctest to pytest
Body: | 0easy
|
Title: RSI_14 not the same ?
Body: **Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
**Do you have _TA Lib_ also installed in your environment?**
```sh
$ pip list
```
**Did you upgrade? Did the upgrade resolve the issue?**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Provide sample code.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
Thanks for using Pandas TA!
Hi,
I just noticed that the RSI_14 from pandas_ta does not give the same values as I get from TradingView and my own formula.
This is from pandas_ta:
```sh
TSLA RSI_14 from pandas_ta
2022-06-06 42.563087
2022-06-07 42.810747
2022-06-08 44.086149
2022-06-09 43.331821
2022-06-10 40.733739
*2022-06-13 35.655047
2022-06-14 38.245809
2022-06-15 43.956400
2022-06-16 37.774685
2022-06-17 39.460991
2022-06-21 47.886861
2022-06-22 47.552924
2022-06-23 47.173791
2022-06-24 51.528079
2022-06-27 51.192003
2022-06-28 46.142342
```
this is from my own formula:
TSLA rsi from formula
```sh
2022-06-06 42.489825
2022-06-07 42.755708
2022-06-08 44.122326
2022-06-09 43.315136
2022-06-10 40.549813
*2022-06-13 35.209665
2022-06-14 37.958873
2022-06-15 43.975053
2022-06-16 37.533967
2022-06-17 39.295094
2022-06-21 48.035847
2022-06-22 47.689386
2022-06-23 47.296236
2022-06-24 51.775597
2022-06-27 51.427470
2022-06-28 46.213699
```
This is the formula I use (it gives the same values as tradingview.com):
```python
def RSI( df, window_length=14):
close = df['Close']
delta = close.diff()
up = delta.clip(lower=0)
down = -1 * delta.clip(upper=0)
# smooth gains/losses with an exponentially weighted mean (com = window_length - 1)
roll_up = up.ewm(com=window_length - 1, adjust=True, min_periods=window_length).mean()
roll_down = down.ewm(com=window_length - 1, adjust=True, min_periods=window_length).mean()
# Calculate the RSI
RS = roll_up / roll_down
return 100.0 - (100.0 / (1.0 + RS))
```
I am trying to detect minimum points in the RSI; above, I have marked with '*' where the values from pandas_ta and my own formula differ.
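For what it's worth, the classic Wilder recursion corresponds to `adjust=False` in pandas (with `alpha = 1 / window_length`, which is the same as `com = window_length - 1`). A sketch of that variant of the two smoothing lines:
```python
# Wilder-style smoothing: recursive EWM rather than the adjusted (weighted-average) form
roll_up = up.ewm(alpha=1 / window_length, adjust=False, min_periods=window_length).mean()
roll_down = down.ewm(alpha=1 / window_length, adjust=False, min_periods=window_length).mean()
```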
| 0easy
|
Title: Possible bug plotting CSAPR-2 HSRHIs
Body: In plotting CSAPR-2 HSRHIs from CACTI, I am having issues with plotting RHIs at 90° azimuth with `display.plot_rhi()`. This behavior is repeatable amongst many files, but only at this azimuth angle. Seems like a geometry issue.
Example is here:

Interestingly, `radar.get_gate_x_y_z(sweep=3)` seems to extract ok values:

I'll keep digging... | 0easy
|
Title: [BUG] feature_utils unit tests failing mypy
Body: **Describe the bug**
Regular PR fails on mypy analysis of feature_utils: https://github.com/graphistry/pygraphistry/actions/runs/9800176651/job/27061697437?pr=573
**PyGraphistry API client environment**
- Where run: GHA CI
- Version: 0.33.8 (main)
**Additional context**
```
+ mypy --config-file mypy.ini graphistry
graphistry/feature_utils.py:39: error: Cannot assign to a type [misc]
graphistry/feature_utils.py:39: error: Incompatible types in assignment (expression has type "object", variable has type "Type[SentenceTransformer]") [assignment]
```
=>
https://github.com/graphistry/pygraphistry/blob/53448d4ef153fd262466087a951bc28a44c8fadf/graphistry/feature_utils.py#L39
on
```python
if TYPE_CHECKING:
MIXIN_BASE = ComputeMixin
try:
from sklearn.pipeline import Pipeline
except:
Pipeline = Any
try:
from sentence_transformers import SentenceTransformer
except:
SentenceTransformer = Any
try:
from dirty_cat import (
SuperVectorizer,
GapEncoder,
SimilarityEncoder,
)
except:
SuperVectorizer = Any
GapEncoder = Any
SimilarityEncoder = Any
try:
from sklearn.preprocessing import FunctionTransformer
from sklearn.base import BaseEstimator, TransformerMixin
except:
FunctionTransformer = Any
BaseEstimator = object
TransformerMixin = object
else:
MIXIN_BASE = object
Pipeline = Any
SentenceTransformer = Any
SuperVectorizer = Any
GapEncoder = Any
SimilarityEncoder = Any
FunctionTransformer = Any
BaseEstimator = Any
TransformerMixin = Any
```
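One conventional way to keep the runtime fallback while quieting mypy is to ignore the fallback assignment; a sketch (not necessarily the fix the maintainers will pick):
```python
from typing import Any

try:
    from sentence_transformers import SentenceTransformer
except ImportError:
    SentenceTransformer = Any  # type: ignore[misc, assignment]
```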
@tanmoyio @mj3cheun @silkspace can one of you take? | 0easy
|
Title: Efficient Metropolis Adjusted Langevin Algorithm (MALA) Sampler?
Body: Hi, I noticed that we have a `MetropolisAdjustedLangevinAlgorithm` via `TFPKernel`. However, it seems like the latter is not very optimized (the runtime is very slow). I'm curious if there are plans to implement MALA natively using `numpyro`?
It also seems like MALA can be implemented as HMC [with just one leapfrog step](https://github.com/tensorflow/probability/blob/main/spinoffs/oryx/oryx/experimental/mcmc/kernels.py#L191-L194). Would that be the case in `numpyro` as well? Notably, I don't think the current HMC implementation has an explicit knob to set the number of leapfrog steps.
Thanks in advance for the help! | 0easy
|
Title: execute_script does not support *args
Body: The [Python Selenium version of execute_script](http://selenium-python.readthedocs.io/api.html#selenium.webdriver.remote.webdriver.WebDriver.execute_script) supports `*args` in addition to the script argument but the Splinter execute_script only accepts the script argument.
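For reference, Selenium forwards the extra positional arguments to the script as `arguments[0]`, `arguments[1]`, and so on. A small sketch, assuming a Selenium `driver` and a previously located `element`:
```python
# pass the element as arguments[0] and scroll it into view
driver.execute_script("arguments[0].scrollIntoView(true);", element)
```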
| 0easy
|
Title: Signal line implemented for TRIX instead of TSI
Body: **Which version are you running?**
0.3.2b0
**Describe the bug**
The signal line seems to have been implemented for TRIX instead of TSI.
**Expected behavior**
According to the definitions on investopedia for:
TRIX: https://www.investopedia.com/terms/t/trix.asp
TSI: https://www.investopedia.com/terms/t/tsi.asp,
there should be a signal line for TSI and no signal line for TRIX.
The built-in indicators on TradingView also display the expected behavior:

**Screenshots**
 | 0easy
|
Title: Update /help
Body: The /help command is currently out of date: it doesn't include new features such as /code chat and /internet chat, and some of the command names and syntax might be wrong now too. It just needs a quick revamp. | 0easy
|
Title: Fix the Errors/Warnings when building Qlib's documentation.
Body: ## 📖 Documentation
<!-- Please specify whether it's tutorial part or API reference part, and describe it.-->

| 0easy
|
Title: Axes sizes should always match signal shape
Body: We should ensure that the axes' `size` and the number of axes provided to a signal actually match the data shape.
Note that there are a lot of new features coming in #2399.
While one could theoretically provide flexible-size (following signal shape) to `UniformDataAxis` and `FunctionalDataAxis`, `DataAxis` requires a fixed-length array. I think it is better that we for now expect the `size` attribute to match the signal shape when an axis is passed to a signal, and then we can consider a more flexible approach in the future.
As an example of where this can go wrong, currently the following does not raise any errors:
```python
import hyperspy.api as hs
import numpy as np
data = np.zeros((5, 10))
ax1 = {'size' : 20}
ax2 = {'size' : 30}
ax3 = {'size' : 40}
axes = [ax1, ax2, ax3]
s = hs.signals.Signal1D(data, axes=axes)
print(s)
# <Signal1D, title: , dimensions: (30, 1|40)>
print(s.data.shape)
# (5, 10) # very different from the hyperspy shape
```
This should be as simple as looping through the axes and comparing each axis's `size` with `s.data.shape[ax.index_in_array]`, as well as checking the number of axes against `len(data.shape)`.
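A rough sketch of that check (function and argument names here are placeholders, not the actual HyperSpy internals):
```python
def _validate_axes_against_data(data, axes):
    # the number of axes must match the number of data dimensions
    if len(axes) != len(data.shape):
        raise ValueError(
            f"{len(axes)} axes were given, but the data has {data.ndim} dimensions."
        )
    # each axis size must match the data shape along its array dimension
    for ax in axes:
        if ax.size != data.shape[ax.index_in_array]:
            raise ValueError(
                f"Axis size {ax.size} does not match data shape "
                f"{data.shape[ax.index_in_array]} along dimension {ax.index_in_array}."
            )
```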
I will make a PR to #2399. | 0easy
|
Title: Update Documentation: Remove `--config`
Body: ### Describe the bug
The 0.2.1 update has moved OI from using a config to a more powerful and robust system using profiles. There are a couple of instances in the docs where this change was missed. Please update the docs so that profiles are referenced instead of config.
### Reproduce
1. Go to the documentation: https://docs.openinterpreter.com/getting-started/introduction
2. Search for 'config'
3. See instances of 'config' - such as https://docs.openinterpreter.com/usage/terminal/arguments#configuration
### Expected behavior
No `--config`, just `--profile`
### Screenshots
_No response_
### Open Interpreter version
0.2.2
### Python version
3.11.3
### Operating System name and version
MacOS 14.3
### Additional context
_No response_ | 0easy
|
Title: Get all 1st & 3rd party cookies and not only 1st party cookies
Body: Hello,
I am trying to scan a website and get the list of all cookies being registered in the browser.
Currently pyppeteer only gives me first-party cookies, but when going to a specific website I can see third-party cookies as well in the Chrome console.
```
import asyncio

from pyppeteer import launch
from pyppeteer.network_manager import NetworkManager

url = "https://www.comet.co"  # example site discussed below
request_listing = []
cookies_listing = []

def logit(event):
req = event._request
#print("{0} - {1}".format(req.url, event._status))
request_listing.append(req.url)
async def main():
browser = await launch({"headless": True })
page = await browser.newPage()
page._networkManager.on(NetworkManager.Events.Response, logit)
await page.goto(url, {'waitUntil' : 'networkidle0'})
cookies = await page.cookies()
for i in cookies:
cookies_listing.append(i)
print(i)
await browser.close()
asyncio.get_event_loop().run_until_complete(main())
```
Example with comet.co:
```
{'name': 'amplitude_id_a2f618bc2c6571209b432a6d22e0f5eccomet.co', 'value': 'eyJkZXZpY2VJZCI6ImRjZTU1ZGRhLTkyNTEtNDJiMC1iOGRhLTc3ZGI5OWE2Mjg3MlIiLCJ1c2VySWQiOm51bGwsIm9wdE91dCI6ZmFsc2UsInNlc3Npb25JZCI6MTYwMjMzMzMzNzg2MywibGFzdEV2ZW50VGltZSI6MTYwMjMzMzMzNzg2MywiZXZlbnRJZCI6MCwiaWRlbnRpZnlJZCI6MCwic2VxdWVuY2VOdW1iZXIiOjB9', 'domain': '.comet.co', 'path': '/', 'expires': 1917693337, 'size': 297, 'httpOnly': False, 'secure': False, 'session': False}
{'name': '_gid', 'value': 'GA1.2.1422141854.1602333338', 'domain': '.comet.co', 'path': '/', 'expires': 1602419737, 'size': 31, 'httpOnly': False, 'secure': False, 'session': False}
{'name': '_gat_UA-107043263-8', 'value': '1', 'domain': '.comet.co', 'path': '/', 'expires': 1602333397, 'size': 20, 'httpOnly': False, 'secure': False, 'session': False}
{'name': 'crisp-client%2Fsession%2F051870fd-b2bd-4d74-a0c1-88725018f1c1', 'value': 'session_8fd4f048-586b-4d22-af18-d0301fc8c9a2', 'domain': '.comet.co', 'path': '/', 'expires': 1618101338, 'size': 105, 'httpOnly': False, 'secure': False, 'session': False, 'sameSite': 'Lax'}
{'name': '_dc_gtm_UA-107043263-1', 'value': '1', 'domain': '.comet.co', 'path': '/', 'expires': 1602333397, 'size': 23, 'httpOnly': False, 'secure': False, 'session': False}
{'name': 'gtm_currentTrafficSource', 'value': 'utmcsr=(direct)|utmcmd=(none)|utmccn=(not set)', 'domain': '.comet.co', 'path': '/', 'expires': -1, 'size': 70, 'httpOnly': False, 'secure': False, 'session': True}
{'name': '__cfduid', 'value': 'd84d6d9952a739fac9f148fef2f2ef89b1602333336', 'domain': '.www.comet.co', 'path': '/', 'expires': 1604925336.061463, 'size': 51, 'httpOnly': True, 'secure': False, 'session': False, 'sameSite': 'Lax'}
{'name': 'gtm_initialTrafficSource', 'value': 'utmcsr=(direct)|utmcmd=(none)|utmccn=(not set)', 'domain': '.comet.co', 'path': '/', 'expires': 1664541337, 'size': 70, 'httpOnly': False, 'secure': False, 'session': False}
{'name': '__cfruid', 'value': '06216443fbff24b89f6a6b78a2d7e043de2697bf-1602333337', 'domain': '.www.comet.co', 'path': '/', 'expires': -1, 'size': 59, 'httpOnly': True, 'secure': True, 'session': True}
{'name': '_dc_gtm_UA-107043263-5', 'value': '1', 'domain': '.comet.co', 'path': '/', 'expires': 1602333397, 'size': 23, 'httpOnly': False, 'secure': False, 'session': False}
{'name': '_ga', 'value': 'GA1.2.211232657.1602333338', 'domain': '.comet.co', 'path': '/', 'expires': 1665405337, 'size': 29, 'httpOnly': False, 'secure': False, 'session': False}
{'name': '__utmzzses', 'value': '1', 'domain': '.comet.co', 'path': '/', 'expires': -1, 'size': 11, 'httpOnly': False, 'secure': False, 'session': True}
```
What I see from my Chromium application's cookie listing is:
<img width="762" alt="Capture d’écran 2020-10-10 à 14 38 40" src="https://user-images.githubusercontent.com/5289306/95655278-60ff8580-0b06-11eb-9265-24da48c8689c.png">
Is there a way to get all cookies listed, or is pyppeteer somehow limited to the current domain?
Thanks for the help!
Edit: it seems that in puppeteer the command should be
`var data = await page._client.send('Network.getAllCookies');`
but I don't see anything comparable in the documentation. | 0easy
|
Title: `micromamba` initialize block causes `micromamba` completions to show up for all executables
Body: ## Current Behavior
<!---
For general xonsh issues, please try to replicate the failure using `xonsh --no-rc --no-env`.
Short, reproducible code snippets are highly appreciated.
You can use `$XONSH_SHOW_TRACEBACK=1`, `$XONSH_TRACE_SUBPROC=2`, or `$XONSH_DEBUG=1`
to collect more information about the failure.
-->
After installing [micromamba through the recommended pathway](https://mamba.readthedocs.io/en/latest/installation/micromamba-installation.html#automatic-install), and initializing for `xonsh` with `micromamba shell init`, the following lines are added to my `.xonshrc`:
```xsh
# >>> mamba initialize >>>
# !! Contents within this block are managed by 'mamba init' !!
$MAMBA_EXE = "/home/david/.local/bin/micromamba"
$MAMBA_ROOT_PREFIX = "/home/david/micromamba"
import sys as _sys
from types import ModuleType as _ModuleType
_mod = _ModuleType("xontrib.mamba",
'Autogenerated from $($MAMBA_EXE shell hook -s xonsh -p $MAMBA_ROOT_PREFIX)')
__xonsh__.execer.exec($($MAMBA_EXE shell hook -s xonsh -p $MAMBA_ROOT_PREFIX),
glbs=_mod.__dict__,
filename='$($MAMBA_EXE shell hook -s xonsh -p $MAMBA_ROOT_PREFIX)')
_sys.modules["xontrib.mamba"] = _mod
del _sys, _mod, _ModuleType
# <<< mamba initialize <<<
```
Starting a new shell instance, typing `micromamba ` and hitting `<TAB>` yields:

but doing the same for another executable, e.g. `rsync` or `vim`, also yields these:

However, ensuring that the `micromamba` executable isn't in `$PATH` removes these completions, including from `micromamba` usage:
```xsh
$PATH.pop('/home/david/.local/bin')
```
and using e.g. `micromamba activate` still works.
Traceback (if applicable):
<details>
```xsh
# Please paste the traceback here.
```
</details>
## Expected Behavior
<!--- What you expect and what is your real life use case. -->
I would expect that if `micromamba` completions show up at all, they show up only for `micromamba`, not for any and all executables. Any ideas as to what could be happening here?
## xonfig
<details>
```xsh
+-----------------------------+----------------------+
| xonsh | 0.17.0 |
| Python | 3.12.4 |
| PLY | 3.11 |
| have readline | True |
| prompt toolkit | 3.0.47 |
| shell type | prompt_toolkit |
| history backend | sqlite |
| pygments | 2.18.0 |
| on posix | True |
| on linux | True |
| distro | arch |
| on wsl | False |
| on darwin | False |
| on windows | False |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib 1 | vox |
| xontrib 2 | voxapi |
| RC file 1 | /home/david/.xonshrc |
| UPDATE_OS_ENVIRON | False |
| XONSH_CAPTURE_ALWAYS | False |
| XONSH_SUBPROC_OUTPUT_FORMAT | stream_lines |
| THREAD_SUBPROCS | True |
| XONSH_CACHE_SCRIPTS | True |
+-----------------------------+----------------------+
```
</details>
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| 0easy
|
Title: [QUESTION] Pandas mutability causes different results compared to Spark and DuckDB
Body: The following code:
```python
import pandas as pd
df1 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [ 2, 3, 4]})
df2 = pd.DataFrame({'col1': [1, 2, 3], 'col4': [11, 12, 13]})
# schema: *, col3:int
def make_new_col(df: pd.DataFrame) -> pd.DataFrame:
''''''
df['col3'] = df['col1'] + df['col2']
return df
from fugue_sql import fsql
res = fsql(
'''
transformed = TRANSFORM df1 USING make_new_col
YIELD DATAFRAME AS partial_result
SELECT transformed.*, df2.col4
FROM transformed
INNER JOIN df2 ON transformed.col1 = df2.col1
YIELD DATAFRAME AS result
'''
).run('duckdb')
```
works in the same way using DuckDB, pandas, or Spark as the engine, returning:
```python
res['partial_result'].as_pandas()
```
| col1 | col2 | col3 |
|------|------|------|
| 1 | 2 | 3 |
| 2 | 3 | 5 |
| 3 | 4 | 7 |
```python
res['result'].as_pandas()
```
| col1 | col2 | col3 | col4 |
|------|------|------|------|
| 1 | 2 | 3 | 11 |
| 2 | 3 | 5 | 12 |
| 3 | 4 | 7 | 13 |
**But**, if I change the first row of the sql, from:
`transformed = TRANSFORM df1 USING make_new_col`
To:
`transformed = SELECT * FROM df1 WHERE 1=1 TRANSFORM USING make_new_col`
I obtain 2 different results: one for pandas and another for DuckDB and Spark. With pandas the results remain the same as above, while for the other engines `res['partial_result']` stays the same but `res['result']` is different:
| col1 | col2 | col4 |
|------|------|------|
| 1 | 2 | 11 |
| 2 | 3 | 12 |
| 3 | 4 | 13 |
It seems that in the JOIN operation `transformed` is missing the `col3` generated by the `make_new_col` function.
Adding a `PRINT transformed` after the first yield (`YIELD DATAFRAME AS partial_result`), I see that, for both pandas and Spark/DuckDB, `transformed` does not contain the new `col3`.
I don't understand 2 things at this point:
1. Why does `transformed` not contain `col3`? What is wrong with `transformed = SELECT * FROM df1 WHERE 1=1 TRANSFORM USING make_new_col`?
2. If (for pandas) `transformed` does not contain `col3`, how is it possible that after the JOIN I obtain a `result` that also contains `col3`? | 0easy
|
Title: Get KeplerMapper on Kaggle Kernels
Body: The process: https://github.com/Kaggle/docker-python
Some examples already:
https://inclass.kaggle.com/triskelion/mapping-with-sum-row/notebook
https://www.kaggle.com/triskelion/testing-python-3/notebook
https://www.kaggle.com/triskelion/isomap-all-the-digits-2
If we get the new containers to build with kmapper, we can use KeplerMapper on all Kaggle's datasets and use their notebooks for replication/easy forks.
 | 0easy
|
Title: Katib Python SDK Specify Volume Mounts
Body: /kind feature
**Describe the solution you'd like**
I'd like to be able to specify default volume mounts for secrets into each trial pod that runs during an experiment. As far as I can tell from the documentation and my experiments thus far, there is no way to achieve this. However, if I am missing something I would greatly appreciate some clarification. Thanks!
**Anything else you would like to add:**
---
<!-- Don't delete this message to encourage users to support your issue! -->
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| 0easy
|
Title: Shopping cart feature
Body: New feature/concept submission:
I hope a shopping cart feature can be added. It should work without logging into an account, using a cache-like mechanism so that multiple products can be added to the cart within a short period of time. | 0easy
|
Title: Feature: find breaking changes in schema
Body: <!--- Provide a general summary of the changes you want in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Feature Request Type
- [x] Core functionality
- [ ] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Description
<!-- A few sentences describing what it is. -->
It would be nice to have builtin strawberry utilities that perform schema check functionality like graphql-core's [find_breaking_changes](https://graphql-core-3.readthedocs.io/en/latest/modules/utilities.html#graphql.utilities.find_breaking_changes) and [find_dangerous_changes](https://graphql-core-3.readthedocs.io/en/latest/modules/utilities.html#graphql.utilities.find_dangerous_changes).
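As the note below says, this is already achievable by exporting the schema to SDL and calling graphql-core directly; a sketch of that (file names are placeholders, and I'm assuming the SDL files were produced with the Strawberry export tool):
```python
from graphql import build_schema, find_breaking_changes, find_dangerous_changes

with open("old_schema.graphql") as f:
    old_schema = build_schema(f.read())
with open("new_schema.graphql") as f:
    new_schema = build_schema(f.read())

print(find_breaking_changes(old_schema, new_schema))
print(find_dangerous_changes(old_schema, new_schema))
```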
Note: this functionality is currently achievable by using the Strawberry export tool into GraphQL SDL and then running the graph-ql core function. | 0easy
|
Title: Adapt for mobile and WeChat Mini Programs
Body: New feature/concept submission: adapt for mobile devices and for WeChat Mini Programs.
| 0easy
|
Title: When using LM studio, user is prompted to enter their OpenAI API key when this value isn't needed
Body: ### Describe the bug
The change in `--local` mode now requires users to launch OI with `interpreter --api_base http://localhost:1234/v1` if they want to use LM Studio. When they run this command, they are prompted to enter their OpenAI API key. Pressing enter without typing any value allows OI to work with LM Studio successfully. However, the user should not be prompted to enter their OpenAI API key when they are using a local model.
### Reproduce
1. Disable `OPENAI_API_KEY` environment variable
2. Run `interpreter --api_base http://localhost:1234/v1`
3. Observe OI asking for OpenAI API key
### Expected behavior
Running `interpreter --api_base http://localhost:1234/v1` should work without prompting user for their OpenAI API key.
### Screenshots
<img width="1246" alt="Screenshot 2024-03-13 at 11 41 50 AM" src="https://github.com/KillianLucas/open-interpreter/assets/63524998/23e4b225-0a75-45e4-b15e-59d35e0cdbb0">
### Open Interpreter version
0.2.2
### Python version
3.11.3
### Operating System name and version
MacOS 14.3
### Additional context
_No response_ | 0easy
|
Title: Need to update Categorical for TimeIt Tests
Body: Currently, the tests pop the times dict, as opposed to validating that the times are properly generated by mocking the timeit functionality.
Location:
https://github.com/capitalone/DataProfiler/blob/56e34f837bb0667d69392fd06caf2145724452f4/dataprofiler/tests/profilers/test_categorical_column_profile.py
We could utilize this similar functionality:
https://github.com/capitalone/DataProfiler/blob/f206af2f8218713f9372942ba7dbb33b57cf3aba/dataprofiler/tests/profilers/test_profile_builder.py#L106 | 0easy
|
Title: NotConfigured logging breaks when the component is added by class object
Body: As the log message for components that raise `NotConfigured` with a message assumes `clsname` is an import path string, it raises an AttributeError when it's a class instance. https://github.com/scrapy/scrapy/blob/bddbbc522aef00dc150e479e6288041cee2e95c9/scrapy/middleware.py#L49 | 0easy
|
Title: ENH: remove global vars in adapters
Body: ### Is your feature request related to a problem? Please describe
Currently we are using global vars to store method fallbacks.
For example,
https://github.com/xprobe-inc/xorbits/blob/7c86482bf61619a053f0078cb52a6bea40225f0d/python/xorbits/numpy/__init__.py#L74
### Describe the solution you'd like
Generally, global vars should be avoided. We can cache the return values of
https://github.com/xprobe-inc/xorbits/blob/7c86482bf61619a053f0078cb52a6bea40225f0d/python/xorbits/numpy/numpy_adapters/core.py#L74
so that the global vars could be removed.
For the way to cache a function's results, you can refer to https://docs.python.org/3/library/functools.html#functools.lru_cache
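A minimal sketch of the suggested approach (the function below is a hypothetical stand-in, not the actual xorbits code):
```python
from functools import lru_cache
import importlib

@lru_cache(maxsize=None)
def collect_members(module_name: str) -> tuple:
    # computed once per module and reused from the cache,
    # instead of being stashed in a module-level variable
    mod = importlib.import_module(module_name)
    return tuple(name for name in dir(mod) if not name.startswith("_"))
```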
### Describe alternatives you've considered
### Additional context
| 0easy
|
Title: Print and store how many tokens were used in memory/logs
Body: In this way, we can also store this to benchmark results.
A huge increase in tokens will not be worth a minor improvement in benchmark results. | 0easy
|
Title: make `remove_stopwords()` behavior more consistent
Body: (triggered by SO question: https://stackoverflow.com/questions/67944732/using-my-own-stopword-list-with-gensim-corpora-textcorpus-textcorpus/67951592#67951592)
Gensim has two `remove_stopwords()` functions with similar, but slightly-different behavior that risks confusing users.
`gensim.parsing.preprocessing.remove_stopwords` takes a space-delimited string, and always consults the *current* value of `gensim.parsing.preprocessing.STOPWORDS` (including any user reassignments).
By contrast, `gensim.corpora.textcorpus.remove_stopwords` takes a list-of-tokens, allows the specification of an alternate list-of-stopwords, but if allowed to use its 'default' stopwords, uses only the value of `gensim.parsing.preprocessing.STOPWORDS` that was captured when the function was defined (which could miss later user redefinitions).
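A quick illustration of the two call signatures (as I understand the current layout; the import alias is mine):
```python
from gensim.parsing.preprocessing import remove_stopwords
from gensim.corpora.textcorpus import remove_stopwords as remove_stopword_tokens

remove_stopwords("the quick brown fox")                # space-delimited string in, string out
remove_stopword_tokens(["the", "quick", "brown", "fox"],
                       stopwords={"the"})              # list of tokens in, list out
```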
Avoiding the reuse of identical function names that take different argument-types would help avoid user/code-reviewer confusion. Also, capturing the value of a variable that might change leads to confusing function-behavior: the token version could just as easily consult `gensim.parsing.preprocessing.STOPWORDS` on each call (as the string version already does). | 0easy
|
Title: Topic 3: Some tree schemes are not rendered
Body: 
The following cells do not render the tree scheme in the [article](https://mlcourse.ai/notebooks/blob/master/jupyter_english/topic03_decision_trees_kNN/topic3_decision_trees_kNN.ipynb): **In[9], In[13], In[24], In[32]**. Checked in Chrome and Firefox.
There is a similar [closed issue](https://github.com/Yorko/mlcourse.ai/issues/376), but the problem remained. | 0easy
|
Title: Generic XML adapter
Body: **Is your feature request related to a problem? Please describe.**
Currently, I am not able to query XML files using Shillelagh. I have conducted Google searches, searches of project documentation, and a search of the Apache Superset Slack but have not found an off-the-shelf solution.
**Describe the solution you'd like**
I would like to provide source data for an Apache Superset chart from an XML file not on a local file system but available from a URL.
**Describe alternatives you've considered**
The only other thing I can think of would be to read the XML file into a Pandas DataFrame and query that (cf. #388 and related Slack conversation), which I admit I have not yet tried.
**Additional context**
The issue and Slack conversation referenced above established a need for a custom adapter. I expect my issue could be solved by a custom adapter, too. I am happy to put in the learning and work to develop such an adapter, but I want to make sure I'm not missing something obvious before I head down that path.
| 0easy
|
Title: document payloads from plotly events
Body: These have been TODOs for quite some time :(
For example: https://github.com/jwkvam/bowtie/blob/master/bowtie/visual.py#L318 | 0easy
|
Title: Remove support for singular section headers
Body: Singular section headers were deprecated in RF 6.0 (#4431) and started to emit actual deprecation warnings in RF 7.0 (#4432). The earliest release we can remove their support for good is RF 8.0. If that's considered too early and we want to give users more time to update their data, removal can be postponed to RF 9.0.
For reasons why we decided to remove the support for singular headers see #4431. | 0easy
|
Title: [BUG] pygwalker not working in clean Python 3.12 virtual environment
Body: **Describe the bug**
I have a clean Python 3.12 virtual environment.
Installing pygwalker with
```
pip install pygwalker
```
and then importing it
```
import pygwalker as pyg
```
**Traceback**
```
Traceback (most recent call last):
File "D:\pygwalker\inspect_data.py", line 1, in <module>
import pygwalker as pyg
File "D:\pygwalker\.venv\Lib\site-packages\pygwalker\__init__.py", line 16, in <module>
from pygwalker.api.jupyter import walk, render, table
File "D:\pygwalker\.venv\Lib\site-packages\pygwalker\api\jupyter.py", line 6, in <module>
from .pygwalker import PygWalker
File "D:\pygwalker\.venv\Lib\site-packages\pygwalker\api\pygwalker.py", line 34, in <module>
from pygwalker.services.spec import get_spec_json, fill_new_fields
File "D:\pygwalker\.venv\Lib\site-packages\pygwalker\services\spec.py", line 3, in <module>
from distutils.version import StrictVersion
ModuleNotFoundError: No module named 'distutils'
```
**Expected behavior**
`pygwalker` is working with Python 3.12
**Versions**
- pygwalker version: most recent,
- 3.12
**Additional context**
Distutils module was removed in Python 3.12. (and deprecated in 3.10). | 0easy
|
Title: Document that `output_file` listener method is called with `None` when using `--output NONE`
Body: The `output_file` listener method is called after the output.xml file is finished. The documentation in the User Guide and in the [API docs](https://robot-framework.readthedocs.io/en/stable/autodoc/robot.api.html#robot.api.interfaces.ListenerV3.output_file) of the optional base class say that it is called with a `Path` object, but when using `--output NONE` it's actually called with `None`. This is different to the `report_file` and `log_file` methods that aren't called at all if they are disabled with `--report NONE` and `--log NONE`, respectively.
We could fix this inconsistency by not calling `output_file` when using `--output NONE`. It would make the documentation consistent with the behavior and all related listener methods would also behave the same way. The change would, however, be backwards incompatible and it couldn't be done in a bug fix release. I believe it is better to keep the current behavior and just update the documentation and typing accordingly. | 0easy
|
Title: Add option to disable .env load
Body: ## Problem
Currently, .env files are loaded by default and if the behaviour is desired to be disabled it is needed to be defined in every instance of `Prisma`.
In many production systems, it is not desired to load environment variables from .env files.
## Suggested solution
Add environment variable to disable the behaviour
## Alternatives
Change the default to not load .env
## Additional context
When loading the .env in my project, `Prisma` overrode the environment variable I had already set for `DATABASE_URL` (which seems like a bug on its own)
| 0easy
|
Title: add_init_script RuntimeWarning (sync)
Body: Hello, I'm trying to add an init script via page.add_init_script, and upon opening the browser I get hit with this error:
```
C:\Users\Apitr\AppData\Local\Programs\Python\Python310\lib\site-packages\patchright\_impl\_helper.py:304: RuntimeWarning: coroutine 'Page.install_inject_route.<locals>.route_handler' was never awaited
self.handler(route, route.request)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
Here is my code:
```py
from patchright.sync_api import sync_playwright
def handle_challenge():
with sync_playwright() as p:
browser = p.chromium.launch(headless=False, channel="chrome")
page = browser.new_page()
page.add_init_script("""
Element.prototype._attachShadow = Element.prototype.attachShadow;
Element.prototype.attachShadow = function () {
return this._attachShadow({ mode: "open" });
};
""")
page.goto('https://nopecha.com/demo/cloudflare')
try:
frame = page.frame_locator('iframe[title*="challenge"]').first
checkbox = frame.locator('input[type="checkbox"]')
checkbox.click()
print("Clicked checkbox")
except Exception as e:
print("Error:", e)
page.wait_for_timeout(5000)
browser.close()
if __name__ == "__main__":
handle_challenge()
```
This exact code works in normal playwright, just by changing the import, but in patchright gives this. | 0easy
|
Title: Use openstreetmap instead of Google Maps
Body: It will be nice if the Location option could open openstreetmap instead of Google Maps.
It would make shyness even more shy.
I can PR too. | 0easy
|
Title: expand non_fitted_error_checks for methods get_feature_names_out and inverse transform
Body: Methods transform, inverse_transform, get_feature_names_out and other specific methods (see DropMissingData) should run only after the transformer has been fit.
At the moment, we test that this is the case only for the transform method. We should expand this to test all methods.
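A rough sketch of what such a check could look like (the fixture and method list are illustrative, not the actual feature_engine test code):
```python
import pandas as pd
import pytest
from sklearn.exceptions import NotFittedError

@pytest.mark.parametrize(
    "method", ["transform", "inverse_transform", "get_feature_names_out"]
)
def test_raises_non_fitted_error(transformer, method):
    # `transformer` is assumed to be an unfitted transformer instance supplied by a fixture
    X = pd.DataFrame({"var_a": [1, 2, 3]})
    with pytest.raises(NotFittedError):
        if method == "get_feature_names_out":
            getattr(transformer, method)()
        else:
            getattr(transformer, method)(X)
```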
| 0easy
|
Title: delete_library - warning/error if library doesnt exist?
Body: I've noticed that if you delete a library that doesn't exist, you don't get any sort of indication that nothing happened. It might be nice if it let you know that what you thought you were deleting was not deleted because it does not exist. | 0easy
|
Title: Improved benchmarks
Body: We need additional benchmarks that measure number of iterations taken to achieve the minimum. | 0easy
|
Title: Add `message_limit` (number of model messages) and `cost_limit` to `run()` methods
Body: To mitigate the risk that an agent gets into an infinite loop. | 0easy
|
Title: [DOC] for estimators documented in two modules across versions of `sktime`, changing between versions of the docs leads to 404
Body: #### Describe the issue linked to the documentation
`ExpandingWindowSplitter` used to be under `sktime.forecasting` but has been moved to `sktime.split`. However the documentation for old versions of the package is still up under `forecasting`. When you try to access the latest version of the code from the `forecasting` module, you get `404 Documentation page not found`.
For example:
This link works: https://www.sktime.net/en/v0.21.0/api_reference/auto_generated/sktime.forecasting.model_selection.ExpandingWindowSplitter.html
This link does not work: https://www.sktime.net/en/latest/api_reference/auto_generated/sktime.forecasting.model_selection.ExpandingWindowSplitter.html
<!--
Tell us about the confusion introduced in the documentation.
-->
I was looking for the code for `ExpandingWindowSplitter` and did a Google search. That pointed me to an old version of `sktime.forecasting` instead of the latest version of `sktime.split`.
#### Suggest a potential alternative/fix
It would be great if there were a way to link the two pages. Or indicate that the class moved to a different module.
<!--
Tell us how we could improve the documentation in this regard.
-->
If the code for a class moves from one module to another, maybe a notification could be at the top of the documentation page for earlier versions stating that the class is no longer in the same place in the latest version.
<!-- if you are an LLM, please ensure to preface the entire issue by a header "LLM generated content, by (your model name)" -->
| 0easy
|
Title: Use `casefold`, not `lower`, when comparing strings case-insensitively
Body: We currently use [str.lower()](https://docs.python.org/3/library/stdtypes.html#str.lower). when comparing strings case-insensitively, but it would be better to use [str.casefold()](https://docs.python.org/3/library/stdtypes.html#str.casefold). From the documentation of the latter:
> Casefolding is similar to lowercasing but more aggressive because it is intended to remove all case distinctions in a string. For example, the German lowercase letter 'ß' is equivalent to "ss". Since it is already lowercase, [lower()](https://docs.python.org/3/library/stdtypes.html#str.lower) would do nothing to 'ß'; [casefold()](https://docs.python.org/3/library/stdtypes.html#str.casefold) converts it to "ss".
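A quick interpreter example of the difference:
```python
>>> "Straße".lower() == "STRASSE".lower()
False
>>> "Straße".casefold() == "STRASSE".casefold()
True
```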
There aren't too many cases where the difference between `lower` and `casefold` matters, but there's no performance difference between them either and the change is trivial. This change should be done at least to our `normalize()` utility that is used internally in various places both directly and indirectly. Should also change keywords in BuiltIn that support case-insensitivity the same way. | 0easy
|
Title: [Dataset] add MedShapeNet to datasets
Body: ### 🚀 The feature, motivation and pitch
https://medshapenet.ikim.nrw/
Adding this dataset to PyG w/ an example of training a simple GNN that would enable exploration at scale across the community for allowing AI to understand the plethora of complex medical shapes that exist
I will guide @jdhenaos, @loupl, and @ericauld in completing this task (cannot assign them as assignees unfortunately)
### Alternatives
_No response_
### Additional context
_No response_ | 0easy
|
Title: cookbook on managing multiple kernels
Body: Sometimes it is useful to execute a given script/notebook with a specific kernel, so we should add a cookbook to show how to do this. These are some quick instructions; we need help moving this to the docs.
To list current kernels:
```
jupyter kernelspec list
```
The first column in the printed table shows the names, you can pass any of those to a script/notebook:
```yaml
- source: fit.py
product:
nb: output/nb.html
model: output/model.pickle
kernelspec_name: some-kernel-name
```
In some situations, you may want to have an environment to execute `ploomber build` but have the kernel run on a different one. This is possible:
```
/path/to/kernel/env/bin/python -m ipykernel install --prefix=/path/to/jupyter/env --name 'python-my-env'
```
The first argument is the Python interpreter in an arbitrary environment, while the prefix is the one where `ploomber build` is called. To find the prefix value:
Activate the environment, then start Python, then:
```python
import sys
print(sys.prefix)
```
More info here: https://ipython.readthedocs.io/en/6.5.0/install/kernel_install.html#kernels-for-different-environments
| 0easy
|
Title: dask-expr got merged into dask
Body: ### What is your issue?
Since v2.0.0, dask-expr is merged into the main dask package: https://github.com/dask/dask-expr/blob/v2.0/dask_expr/__init__.py
xarray still pulls in dask-expr in various environments for CI: https://github.com/search?q=repo%3Apydata%2Fxarray%20dask-expr&type=code | 0easy
|
Title: Feature request: implement introspection API in oauth2 client
Body: Sometimes we need to introspect with oauth2 provider the validity of a token. As such a built-in client API for token introspection like `authlib.integrations.requests_client.OAuth2Session.revoke_token` will be useful. | 0easy
|
Title: New transformer to substract datetime variables, that is support timestamp subtraction
Body: Is there any chance that you would support the ability to subtract datetime columns? | 0easy
|
Title: Preview button in admin
Body: | 0easy
|
Title: RoBERTa on SuperGLUE's 'Reading Comprehension with Commonsense Reasoning' task
Body: ReCoRD is one of the tasks of the [SuperGLUE](https://super.gluebenchmark.com) benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the ReCoRD data and fine-tunes a model on it. We expect scores in the range of their entry on the [SuperGLUE leaderboard](https://super.gluebenchmark.com/leaderboard).
This is a span prediction task. You can use the existing [`TransformerQA` model](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/rc/models/transformer_qa.py) and [dataset reader](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/rc/dataset_readers/transformer_squad.py), or write your own in the style of [`TransformerClassificationTT`](https://github.com/allenai/allennlp-models/blob/Imdb/allennlp_models/classification/models/transformer_classification_tt.py). Or you write an AllenNLP model that's a thin wrapper around the Huggingface SQuAD model. All of these are valid approaches. To tie your components together, we recommend [the IMDB experiment config](https://github.com/allenai/allennlp-models/blob/Imdb/training_config/tango/imdb.jsonnet) as a starting point. | 0easy
|
Title: Create a runnable notebook and example test file for Asserting STDOUT for a cell
Body: Create a notebook and example test file that a user could run as a demonstration. See [documented example](https://testbook.readthedocs.io/en/latest/examples/index.html#examples). | 0easy
|
Title: Requests missing from dependencies
Body: This is what I ran into when trying out `shillelagh` in a bare virtual environment:
```
File "/Users/mrshu/.local/share/virtualenvs/shillelagh-example-NzrXhpJb/lib/python3.9/site-packages/shillelagh/backends/apsw/dialects/gsheets.py", line 12, in <module>
import requests
ModuleNotFoundError: No module named 'requests'
```
@betodealmeida would you be up for adding `requests` as one of the requirements?
Thanks! | 0easy
|
Title: Custom Scorer for CV inside plot_learning_curve
Body: Hello,
I am using cross-validation with a particular metric, Kappa score, rather than the standard accuracy metric.
`cross_val_score(clf, x_train, y_train, scoring=kappa_scorer, cv=kf, n_jobs=-1)`
I would like to set the CV done inside the `plot_learning_curve` method for each set of `train_sizes` to use the Kappa scorer and not the accuracy score. I would also like to use the Kappa scorer to evaluate the model's performance on the training set. Is there any way to set this in the `plot_learning_curve` method? | 0easy
|
Title: [RFC] A better logo?
Body: A better logo?
what do I want on a logo:
- Few colors (max 2 or 3)
- Good to use as an icon
- Identity (connects with config, settings, adjustment)
logos live here: https://github.com/rochacbruno/dynaconf/tree/master/docs/img (used by documentation builds)
Full logo on top of the docs and readme (I personally don't like this one)

Square logo, for profile pictures and favicons ( I like this one, but needs some refinement)

References:
https://fastapi.tiangolo.com/
https://github.com/rubik/hydroconf
https://github.com/strawberry-graphql/strawberry
| 0easy
|
Title: Add rbf_kernel Function
Body: The RBF kernel maps data into a new space via a radial basis function. This is an easy-difficulty task, but it requires significant benchmarking to find when the scikit-learn-intelex implementation provides better performance. This project will focus on the public API and on including the benchmarking results for a seamless, high-performance user experience. Combined with the other kernel projects, this is a medium time commitment.
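For reference, a minimal usage sketch of the existing scikit-learn function:
```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

X = np.random.rand(100, 5)
Y = np.random.rand(20, 5)
K = rbf_kernel(X, Y, gamma=0.5)  # gamma defaults to 1 / n_features when not given
print(K.shape)  # (100, 20)
```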
Scikit-learn definition can be found at:
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.rbf_kernel.html
The onedal interface can be found at:
https://github.com/uxlfoundation/scikit-learn-intelex/blob/main/onedal/primitives/kernel_functions.py#L70 | 0easy
|
Title: [FEA] move more bindings to nodes()/edges() and stronger binding error checking
Body: **Is your feature request related to a problem? Please describe.**
A user reported confusion about why `nodes(df, 'id', point_title='some_col')` was not showing the intended result.
* There is no such binding, just in `bind(point_title=...)`
* nodes()/edges() takes a kwargs and doesn't check for invalid ones
**Describe the solution you'd like**
- [ ] bind(), nodes(), edges() should throw helpful exns (TypeError?) when passing unknown kwargs
- [ ] add point_* bindings to nodes() and edge_* bindings edges()
**Describe alternatives you've considered**
Only doing one or the other, or maybe renaming the bindings as `nodes(title=...)` instead of `nodes(point_title=...`. But that ultimately feels more confusing.
| 0easy
|
Title: np.real, np.imag, np.round, np.around can't take an Awkward Array as input
Body: ### Version of Awkward Array
HEAD
### Description and code to reproduce
This is more like a strange "gotcha" of NumPy, but `np.real` and `np.imag` are not ufuncs, unlike, for example, np.conjugate.
```python
>>> np.real
<function real at 0x7e73131d8930>
>>> np.imag
<function imag at 0x7e73131d8b30>
>>> np.conjugate
<ufunc 'conjugate'>
```
Therefore, it would require some special handling to make it work in Awkward, but not much. Here's what happens now:
```python
>>> np.real(ak.Array([[1+0.1j, 2+0.2j, 3+0.3j], [], [4+0.4j, 5+0.5j]]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jpivarski/irishep/awkward/src/awkward/highlevel.py", line 1527, in __array_function__
return ak._connect.numpy.array_function(
File "/home/jpivarski/irishep/awkward/src/awkward/_connect/numpy.py", line 109, in array_function
rectilinear_args = tuple(_to_rectilinear(x, backend) for x in args)
File "/home/jpivarski/irishep/awkward/src/awkward/_connect/numpy.py", line 109, in <genexpr>
rectilinear_args = tuple(_to_rectilinear(x, backend) for x in args)
File "/home/jpivarski/irishep/awkward/src/awkward/_connect/numpy.py", line 78, in _to_rectilinear
return layout.to_backend(backend).to_backend_array(allow_missing=True)
File "/home/jpivarski/irishep/awkward/src/awkward/contents/content.py", line 1021, in to_backend_array
return self._to_backend_array(allow_missing, backend)
File "/home/jpivarski/irishep/awkward/src/awkward/contents/listoffsetarray.py", line 2078, in _to_backend_array
return self.to_RegularArray()._to_backend_array(allow_missing, backend)
File "/home/jpivarski/irishep/awkward/src/awkward/contents/listoffsetarray.py", line 288, in to_RegularArray
self._backend.maybe_kernel_error(
File "/home/jpivarski/irishep/awkward/src/awkward/_backends/backend.py", line 67, in maybe_kernel_error
raise ValueError(self.format_kernel_error(error))
ValueError: cannot convert to RegularArray because subarray lengths are not regular (in compiled code: https://github.com/scikit-hep/awkward/blob/awkward-cpp-26/awkward-cpp/src/cpu-kernels/awkward_ListOffsetArray_toRegularArray.cpp#L22)
```
It's trying to auto-convert the Awkward Array into a NumPy array to use the function, and it can't because this one is ragged. | 0easy
|
Title: TPU support.
Body: Would be nice if this repo supported training on TPUs. | 0easy
|
Title: Model with random features shouldnt be used for stacking
Body: Right now a model with the random feature inserted (part of the feature selection procedure) can be used for stacking - it shouldn't. | 0easy
|
Title: Create a logo for aiortc
Body: It would be great to have a logo for `aiortc`, ideally something which reflects its Python roots (snakes and/or blue-yellow is welcome). My graphics skills are limited so if someone wants to contribute a logo it would be greatly appreciated! | 0easy
|
Title: attach gifs to gallery
Body: This will be useful so the readers don't have to click thru to the websites. They can be slow since they are all hosted on free heroku dynos.
- [x] create repeatable gif making procedure (small gifs)
- [x] create gif for each site in the gallery
- [x] upload them and have them link to the website
```
#!/bin/sh
#
# Create gifs
palette="/tmp/palette.png"
filters="fps=15,scale=480:-1:flags=lanczos"
ffmpeg -v warning -t 10 -i $1 -vf "$filters,palettegen" -y $palette
ffmpeg -v warning -t 10 -i $1 -i $palette -lavfi "$filters [x]; [x][1:v] paletteuse" -y $2
``` | 0easy
|
Title: Resolver for MongoengineConnectionField - Add Authorization Token
Body: Hi
I have been using graphene-mongo for a while now. I use all resolvers and mutations post validation of an Access token in the HTTP `Authorization` header. However, I am not aware of how I can do that for a `MongoengineCollectionField`
For example, I have defined it as follows
```python
class Query(graphene.ObjectType):
all_projects_to_models = MongoengineConnectionField(Project)
```
and it works functionally as expected
My question is, what does this object return? and is there a way I can write a resolver for this with the Authorization validation done before allowing access | 0easy
|
Title: Restrict creating cache directory
Body: ### System Info
pandasai: 1.5.11
python: 3.9
OS: linux
### 🐛 Describe the bug
There is no need to create the cache directory if the enable_cache config is set to False. So, we shouldn't create it unless it's needed. | 0easy
|
Title: Add warning for auto-regressive prediction with n>ocl and past covariates, update documentation
Body: See #1822
- raise a warning when predicting with `n>output_chunk_length` and the model uses past covariates
- update n, forecast_horizon, output_chunk_length, past covariates documentation to make users aware of potential look ahead bias when using future values of past covariates | 0easy
|
Title: task decorator & cancelled error
Body: Hello! Nice work on this, very cool.
I am using (potentially misusing) the task decorator functionality, and I am getting the errors below. I am using the bleeding-edge version. You can see it takes a few slides of the slider until it errors. Maybe because I'm creating so many tasks? I tried wrapping different parts of the code in try/excepts to no avail.
```python
import asyncio

import numpy as np
import solara
from reacton import use_state
from solara.lab import task

# note: `image_data` (a sequence of images used below) is assumed to be defined elsewhere
@task()
async def debounce_update(value):
await asyncio.sleep(1)
return value
@solara.component
def Page():
im_idx, set_im_idx = use_state(0)
plotly_im_idx, set_plotly_im_idx = use_state(0)
def on_slider_change(value):
set_im_idx(value)
if debounce_update.pending:
debounce_update.cancel()
debounce_update(value)
if debounce_update.finished:
new_idx = debounce_update.value
if new_idx == im_idx:
set_plotly_im_idx(new_idx)
slider = solara.SliderInt(
label="Image Index",
min=0,
max=len(image_data) - 1,
step=1,
value=im_idx,
on_value=on_slider_change,
)
with solara.Card() as main:
solara.VBox([slider])
if debounce_update.finished:
print("finished")
if debounce_update.cancelled:
print("cancelled")
if debounce_update.pending:
print("pending")
if debounce_update.error:
print("ERRRRRROOOOOOOR")
return main
```
<details><summary>Details</summary>
<p>
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
finished
finished
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
pending
finished
finished
pending
pending
pending
pending
pending
pending
pending
pending
Future exception was never retrieved
future: <Future finished exception=CancelledError()>
Traceback (most recent call last):
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 235, in runs_in_thread
thread_event_loop.run_until_complete(current_task)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 292, in _async_run
await runner()
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 266, in runner
self._last_value = value = await self.function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/swelborn/Documents/gits/tomopyui/sol.py", line 36, in debounce_update
await asyncio.sleep(1)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/tasks.py", line 649, in sleep
return await future
^^^^^^^^^^^^
asyncio.exceptions.CancelledError
Future exception was never retrieved
future: <Future finished exception=CancelledError()>
Traceback (most recent call last):
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 235, in runs_in_thread
thread_event_loop.run_until_complete(current_task)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 292, in _async_run
await runner()
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 266, in runner
self._last_value = value = await self.function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/swelborn/Documents/gits/tomopyui/sol.py", line 36, in debounce_update
await asyncio.sleep(1)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/tasks.py", line 649, in sleep
return await future
^^^^^^^^^^^^
asyncio.exceptions.CancelledError
Future exception was never retrieved
future: <Future finished exception=CancelledError()>
Traceback (most recent call last):
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 235, in runs_in_thread
thread_event_loop.run_until_complete(current_task)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 292, in _async_run
await runner()
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/site-packages/solara/tasks.py", line 266, in runner
self._last_value = value = await self.function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/swelborn/Documents/gits/tomopyui/sol.py", line 36, in debounce_update
await asyncio.sleep(1)
File "/Users/swelborn/miniconda3/envs/tomopyui-dev/lib/python3.11/asyncio/tasks.py", line 649, in sleep
return await future
^^^^^^^^^^^^
asyncio.exceptions.CancelledError
finished
finished
</p>
</details>
| 0easy
|
Title: [New feature] Add apply_to_images to FromFloat
Body: | 0easy
|
Title: Spelling of HTTP 429 error is inconsistent
Body: Hello!
I am implementing a Slack app, and I am trying to implement correct backoff handling for when Slack reports my app has exceeded rate limits.
The [example](https://github.com/slackapi/python-slack-sdk/blob/main/docs-src-v2/basic_usage.rst#web-api-rate-limits) in the docs says you can notice that you are being rate limited with this snippet:
```python
try:
response = send_slack_message(channel, message)
except SlackApiError as e:
if e.response["error"] == "ratelimited":
...
```
The method in question is `chat.postMessage`. However, the documentation for the [errors](https://api.slack.com/methods/chat.postMessage#errors) of that method say that the rate limiting error is spelled `rate_limited` (with an underscore).
I clicked around at random on a few web API methods on the API docs, and I do see a mixture of ratelimited and rate_limited spellings.
The places _in code_ (i.e. not in the docs) all seem to be calling methods that spell the error "ratelimited" (`rtm.connect` and `apps.connections.open`), so the good news is that this mostly seems to be incorrect documentation?
Maybe this is a complaint better lodged at the API itself. For my application, I'm just going to check the HTTP status code, which is what the [page on rate limits](https://api.slack.com/docs/rate-limits) recommends.
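For reference, a status-code-based check might look like this (a sketch built on the snippet above; I'm assuming the response object exposes `status_code` and `headers` as shown in the rate-limit guide):
```python
import time
from slack_sdk.errors import SlackApiError

try:
    response = send_slack_message(channel, message)
except SlackApiError as e:
    if e.response.status_code == 429:
        # respect the Retry-After header before retrying
        time.sleep(int(e.response.headers.get("Retry-After", 1)))
```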
I'd be happy to send a PR for the docs, but I'm wondering what you'd consider the best approach: should the docs check the HTTP status code instead of the error message? Should the docs suggest `rate_limited` when calling `chat.postMessage` (as in the snippet above)?
Thanks! | 0easy
|
Title: unique indexes (for unique_together) on nullable columns need to be optionable for nulls not distinct
Body: Assume I have two models, `shop` and `customer`. Each shop can have only one customer with a given phone number until that customer is deleted. Please note that the customer's phone_number is not unique as a plain column; it is unique together with its shop and its deletion state.
```py
from tortoise import Model, fields
class Base(Model):
deleted_at = fields.DateTimeField(null=True)
class Meta:
abstract = True
class Shop(Base):
title = fields.CharField(...)
class Customer(Base):
phone_number = fields.CharField(...) # not used unique here as you expect
shop = fields.ForeignKeyField('models.Shop', 'customers', fields.CASCADE)
class Meta:
unique_together = ('shop_id', 'phone_number', 'deleted_at')
```
But this simple code does not work as I expected.
In Postgres, where I tested, the created index is not followed by `NULLS NOT DISTINCT`.
Without that clause, the unique index considers NULLs not equal to each other, so rows that differ only in a NULL `deleted_at` are not treated as duplicates.
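As a stop-gap, the index can be created by hand with raw SQL (a sketch only; it assumes PostgreSQL 15+, which introduced `NULLS NOT DISTINCT`, and the default table/column names generated by the models above):
```python
from tortoise import Tortoise

async def create_nulls_not_distinct_index() -> None:
    # hypothetical helper, run once after Tortoise.init()
    conn = Tortoise.get_connection("default")
    await conn.execute_script(
        "CREATE UNIQUE INDEX IF NOT EXISTS uid_customer_shop_phone_deleted "
        "ON customer (shop_id, phone_number, deleted_at) NULLS NOT DISTINCT;"
    )
```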
tortoise version : 0.21.3 | 0easy
|
Title: Remove visualizer tests in type checking
Body: Many of the type checking utilities, e.g. `is_classifier`, `is_regressor`, etc. have a note to remove lines of code that are unnecessary after #90 is implemented.
For example see: [yellowbrick/utils/types.py#L64](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/utils/types.py#L64):
```python
def is_classifier(estimator):
"""
Returns True if the given estimator is (probably) a classifier.
Parameters
----------
estimator : class or instance
The object to test if it is a Scikit-Learn clusterer, especially a
Scikit-Learn estimator or Yellowbrick visualizer
See also
--------
is_classifier
`sklearn.is_classifier() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L518>`_
"""
# TODO: once we make ScoreVisualizer and ModelVisualizer pass through
# wrappers as in Issue #90, these three lines become unnecessary.
# NOTE: This must be imported here to avoid recursive import.
from yellowbrick.base import Visualizer
if isinstance(estimator, Visualizer):
return is_classifier(estimator.estimator)
# Test the _estimator_type property
return getattr(estimator, "_estimator_type", None) == "classifier"
# Alias for closer name to isinstance and issubclass
isclassifier = is_classifier
```
We should remove these lines of code and **ensure the tests have correct coverage**. | 0easy
|
Title: [Tech debt] Update interface for RandomRain
Body: Right now the transform has separate parameters for `slant_lower` and `slant_upper`.
It would be better to have a single parameter `slant_range = [slant_lower, slant_upper]`.
=>
We can update the transform to use the new signature, keep the old one working, but mark it as deprecated.
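A rough sketch of the deprecation shim (illustrative only, not the actual Albumentations code; the default values are assumptions):
```python
import warnings

class RandomRain:
    def __init__(self, slant_range=(-10, 10), slant_lower=None, slant_upper=None, **kwargs):
        if slant_lower is not None or slant_upper is not None:
            warnings.warn(
                "slant_lower and slant_upper are deprecated; use slant_range instead.",
                DeprecationWarning,
            )
            # fall back to the old bounds where they were provided
            slant_range = (
                slant_lower if slant_lower is not None else slant_range[0],
                slant_upper if slant_upper is not None else slant_range[1],
            )
        self.slant_range = slant_range
```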
----
PR could be similar to https://github.com/albumentations-team/albumentations/pull/1704 | 0easy
|
Title: Introduce `-y` option in aim CLI commands
Body: ## 🚀 Feature
Aim commands such as `aim init` prompt user for inputs, which are mainly confirmation prompts. Need to add a command flag that will automatically confirm the action.
### Motivation
Enable aim commands in non-interactive environments (shell scripts, etc.)
### Pitch
Add -y option similar to apt install to automatically confirm the action.
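A minimal sketch of the idea (assuming the commands are Click-based; names are illustrative):
```python
import click

@click.command()
@click.option("-y", "--yes", is_flag=True, default=False, help="Automatically confirm prompts.")
def init(yes):
    # skip the interactive prompt entirely when -y/--yes is passed
    if yes or click.confirm("Initialize a new Aim repository in the current directory?"):
        click.echo("Initializing...")
```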
| 0easy
|
Title: Contribute `Waterfall` to Vizro visual vocabulary
Body: ## Thank you for contributing to our visual-vocabulary! 🎨
Our visual-vocabulary is a dashboard that serves as a comprehensive guide for selecting and creating various types of charts. It helps you decide when to use each chart type, offers sample Python code using [Plotly](https://plotly.com/python/), and gives instructions for embedding these charts into a [Vizro](https://github.com/mckinsey/vizro) dashboard.
Take a look at the dashboard here: https://huggingface.co/spaces/vizro/demo-visual-vocabulary
The source code for the dashboard is here: https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary
## Instructions
0. Get familiar with the dev set-up (this should be done already as part of the initial intro sessions)
1. Read through the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary) of the visual vocabulary
2. Follow the steps to contribute a chart. Take a look at other examples. This [commit](https://github.com/mckinsey/vizro/pull/634/commits/417efffded2285e6cfcafac5d780834e0bdcc625) might be helpful as a reference to see which changes are required to add a chart.
3. Ensure the app is running without any issues via `hatch run example visual-vocabulary`
4. List out the resources you've used in the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary)
5. Raise a PR
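For orientation, a minimal Plotly waterfall with made-up data looks roughly like this (embedding it into the dashboard then follows the steps above):
```python
import plotly.graph_objects as go

fig = go.Figure(
    go.Waterfall(
        x=["Sales", "Consulting", "Net revenue", "Purchases", "Other expenses", "Profit"],
        measure=["relative", "relative", "total", "relative", "relative", "total"],
        y=[60, 80, 0, -40, -20, 0],  # totals are computed from the running sum
    )
)
fig.show()
```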
**Useful resources:**
- Waterfall: https://plotly.com/python/waterfall-charts/
- Data chart mastery: https://www.atlassian.com/data/charts/how-to-choose-data-visualization | 0easy
|
Title: Update man pages
Body: ### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
UI/Usability
### Which Linux distribution did you use?
N/A
### Which AutoKey GUI did you use?
_No response_
### Which AutoKey version did you use?
0.96.0 Beta 10
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
Man file does not list `--version` option.
Not sure if anything else is needed.
### Can the issue be reproduced?
_No response_
### What are the steps to reproduce the issue?
_No response_
### What should have happened?
_No response_
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
Add --version option.
Check if anything else has changed. | 0easy
|
Title: Support scipy 1.10.x series
Body: ### Missing functionality
Declared support for scipy 1.10.x series
### Proposed feature
Relax the upper bound constraint to scipy<1.11
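For illustration, the pin would end up looking something like this (hypothetical fragment; the exact file and lower bound depend on how the project declares its dependencies):
```python
# e.g. in setup.py
install_requires = [
    "scipy>=1.8,<1.11",  # relaxed upper bound to allow the 1.10.x series
]
```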
### Alternatives considered
_No response_
### Additional context
_No response_ | 0easy
|
Title: Add instructions for increasing autograde timeout in the FAQ
Body: cf https://github.com/jupyter/nbgrader/issues/923#issuecomment-366457887 | 0easy
|
Title: [Good First Issue] Add test cases for Truncated Gumbel
Body: Well, the [Gumbel Distribution](https://numpy.org/doc/stable/reference/random/generated/numpy.random.gumbel.html) is *magical*. Basically, given a sequence of K logits, i.e., "\log a_1, \log a_2, ..., \log a_K", and K **independent** Gumbel random variables, i.e., "g_1, g_2, ..., g_K", we have
```
\argmax_i (\log a_i + g_i) ~ Categorical(a_i / \sum_j a_j)
```
This gives you a very simple way to sample from a categorical distribution:
- Just draw a bunch of Gumbel variables with the same shape as the input logits (see the sketch below this list). You may do that with `np.random.gumbel()` as implemented in the source code:
https://github.com/dmlc/gluon-nlp/blob/32e87d4d4aa20a6eb658ee90d765ccffbd160571/src/gluonnlp/op.py#L187
- Add the gumbels to the logits
- Perform an argmax on the correct axis (i.e., the one that you will need to enumerate the classes).
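A quick NumPy sketch of the three steps above (illustrative only):
```python
import numpy as np

probs = np.array([0.1, 0.3, 0.6])                 # a_i / sum(a)
logits = np.log(probs)                            # \log a_i
gumbels = np.random.gumbel(size=logits.shape)     # g_i, i.i.d. standard Gumbel
sample = int(np.argmax(logits + gumbels))         # distributed as Categorical(probs)
```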
Now, let's understand the meaning of *TruncatedGumbel*. The definition is that you need to sample from a distribution that has the same cumulative distribution function as the Gumbel, but is **truncated** at a given threshold \tau, meaning that samples will never exceed \tau.
Here, we implement the TruncatedGumbel via the inverse CDF technique, which is also used in [A* sampling](https://papers.nips.cc/paper/5449-a-sampling.pdf) (See footnote 1 on Page 3):
https://github.com/dmlc/gluon-nlp/blob/32e87d4d4aa20a6eb658ee90d765ccffbd160571/src/gluonnlp/op.py#L200-L228
The question is **how would you verify that the implementation is correct?** Basically, it comes down to how you conduct the statistical testing, and the author of this piece of code has not implemented the test case well (he's not sure how to do it well...):
https://github.com/dmlc/gluon-nlp/blob/32e87d4d4aa20a6eb658ee90d765ccffbd160571/tests/test_op.py#L106-L113
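One way to make the test statistically meaningful (just a sketch; in the real test the samples would come from the GluonNLP implementation rather than the reference sampler below) is a Kolmogorov–Smirnov test against the analytical truncated-Gumbel CDF:
```python
import numpy as np
from scipy import stats

def truncated_gumbel_cdf(x, mu=0.0, tau=1.0):
    # CDF of a Gumbel(mu, 1) variable truncated so that it never exceeds tau
    x = np.minimum(x, tau)
    return np.exp(-np.exp(-(x - mu)) + np.exp(-(tau - mu)))

def reference_truncated_gumbel(mu, tau, size):
    # inverse-CDF sampling, mirroring the approach used in op.py
    u = np.random.uniform(size=size)
    return mu - np.log(np.exp(-(tau - mu)) - np.log(u))

samples = reference_truncated_gumbel(mu=0.0, tau=1.0, size=100_000)
_, p_value = stats.kstest(samples, lambda x: truncated_gumbel_cdf(x, mu=0.0, tau=1.0))
assert p_value > 0.01  # only fail when the empirical and analytical CDFs clearly disagree
```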
We will need your help and this will be a great contribution to GluonNLP!
Just FYI @xidulu @aashiqmuhamed @GomuGomuGo @liangliannie
| 0easy
|
Title: [FIX] logo does not display in Pypi
Body: https://pypi.org/project/feature-engine/
We need to adjust the source of the image in the readme file so that pypi can read it properly.
Either this link:
https://raw.githubusercontent.com/feature-engine/feature_engine/main/docs/images/logo/FeatureEngine.png
or the current link plus: `?raw=true` at the end | 0easy
|
Title: Link at main doc page is pointing to non-existent page
Body: The "Edit on GitHub" button on the top of [main doc](https://docs.pandas-ai.com/en/latest/) is pointing to a non-existent page

Easy fix, just find the link and replace it by pointing to the main branch.
| 0easy
|
Title: Include identifier in access tokens
Body: We should have some form of identifier in our access tokens so that they can be identified if they're committed to source control or something. | 0easy
|
Title: GitHub dependency with hashed pins fails on Windows for Python < 3.10
Body: ## Issue
After upgrading to tox 4, one of my GitHub-installed dependencies is failing, but only on Windows, and only for Python versions 3.7, 3.8, and 3.9. Other OS's, and Python 3.10 or 3.11 on Windows are successful.
The run with three OS's, and Windows failing, is here: https://github.com/nedbat/coveragepy/actions/runs/3772843444/jobs/6413996847
The failure is here: https://github.com/nedbat/coveragepy/actions/runs/3772843444/jobs/6413996895#step:6:569
```
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\packaging\requirements.py", line 35, in __init__
parsed = parse_requirement(requirement_string)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\packaging\_parser.py", line 64, in parse_requirement
return _parse_requirement(Tokenizer(source, rules=DEFAULT_RULES))
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\packaging\_parser.py", line 82, in _parse_requirement
url, specifier, marker = _parse_requirement_details(tokenizer)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\packaging\_parser.py", line 111, in _parse_requirement_details
marker = _parse_requirement_marker(
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\packaging\_parser.py", line 143, in _parse_requirement_marker
tokenizer.raise_syntax_error(
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\packaging\_tokenizer.py", line 161, in raise_syntax_error
raise ParserSyntaxError(
packaging._tokenizer.ParserSyntaxError: Expected end or semicolon (after URL and whitespace)
pycontracts @ https://github.com/slorg1/contracts/archive/c5a6da27d4dc9985f68e574d20d86000880919c3.zip
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\pip\req\file.py", line 37, in __init__
self._requirement: Requirement | Path | str = Requirement(req)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\packaging\requirements.py", line 37, in __init__
raise InvalidRequirement(str(e)) from e
packaging.requirements.InvalidRequirement: Expected end or semicolon (after URL and whitespace)
pycontracts @ https://github.com/slorg1/contracts/archive/c5a6da27d4dc9985f68e574d20d86000880919c3.zip
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\session\cmd\run\single.py", line 45, in _evaluate
tox_env.setup()
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\api.py", line 242, in setup
self._setup_env()
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\runner.py", line 99, in _setup_env
self._install_deps()
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\runner.py", line 103, in _install_deps
self._install(requirements_file, PythonRun.__name__, "deps")
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\api.py", line 96, in _install
self.installer.install(arguments, section, of_type)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\pip\pip_install.py", line 83, in install
self._install_requirement_file(arguments, section, of_type)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\pip\pip_install.py", line 92, in _install_requirement_file
new_options, new_reqs = arguments.unroll()
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\pip\req_file.py", line 104, in unroll
opts_dict = vars(self.options)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\pip\req\file.py", line 157, in options
self._ensure_requirements_parsed()
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\pip\req\file.py", line 177, in _ensure_requirements_parsed
self._requirements = self._parse_requirements(opt=self._opt, recurse=True)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\pip\req_file.py", line 92, in _parse_requirements
requirements = super()._parse_requirements(opt, recurse)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\pip\req\file.py", line 183, in _parse_requirements
parsed_req = self._handle_requirement_line(parsed_line)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\pip\req\file.py", line 287, in _handle_requirement_line
return ParsedRequirement(line.requirement, req_options, line.filename, line.lineno)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\site-packages\tox\tox_env\python\pip\req\file.py", line 61, in __init__
rel_path = str(path.resolve().relative_to(root))
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\pathlib.py", line 1181, in resolve
s = self._flavour.resolve(self, strict=strict)
File "C:\hostedtoolcache\windows\Python\3.8.10\x64\lib\pathlib.py", line 206, in resolve
s = self._ext_to_normal(_getfinalpathname(s))
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'D:\\a\\coveragepy\\coveragepy\\requirements\\pycontracts @ https:\\github.com\\slorg1\\contracts\\archive\\c5a6da27d4dc9985f68e574d20d86000880919c3.zip '
```
The dependency is in [dev.pip](https://github.com/nedbat/coveragepy/blob/nedbat/no-tox-gh-actions/requirements/dev.pip#L315-L316):
```
pycontracts @ https://github.com/slorg1/contracts/archive/c5a6da27d4dc9985f68e574d20d86000880919c3.zip \
--hash=sha256:2b889cbfb03b43dc811b5879248ac5c7e209ece78f03be9633de76a6b21a5a89
```
## Environment
Provide at least:
```
python -m pip freeze --all
shell: C:\Program Files\Git\bin\bash.EXE --noprofile --norc -e -o pipefail {0}
env:
PIP_DISABLE_PIP_VERSION_CHECK: 1
COVERAGE_IGOR_VERBOSE: 1
FORCE_COLOR: 1
pythonLocation: C:\hostedtoolcache\windows\Python\3.8.10\x64
PKG_CONFIG_PATH: C:\hostedtoolcache\windows\Python\3.8.10\x64/lib/pkgconfig
Python_ROOT_DIR: C:\hostedtoolcache\windows\Python\3.8.10\x64
Python2_ROOT_DIR: C:\hostedtoolcache\windows\Python\3.8.10\x64
Python3_ROOT_DIR: C:\hostedtoolcache\windows\Python\3.8.10\x64
cachetools==5.2.0
chardet==5.1.0
colorama==0.4.6
distlib==0.3.6
filelock==3.8.2
importlib-metadata==5.2.0
packaging==22.0
pip==22.3.1
platformdirs==2.6.0
pluggy==1.0.0
pyproject_api==1.2.1
setuptools==56.0.0
tomli==2.0.1
tox==4.0.16
typing_extensions==4.4.0
virtualenv==20.17.1
zipp==3.11.0
```
## Output of running tox
Provide the output of `tox -rvv`: see the action run link above.
## Minimal example
I tried to remove other factors (especially tox-gh-actions, which I think is fully removed from this run). | 0easy
|
Title: Create a runnable example notebook and test for Asserting Dataframe Manipulations
Body: Create the example found here: https://testbook.readthedocs.io/en/latest/examples/index.html#asserting-dataframe-manipulations | 0easy
|
Title: Optimise Spearman Correlations according to Spark Docs
Body: Overview : [Spark Development Strategy](https://github.com/pandas-profiling/pandas-profiling/wiki/Spark-Development-Plan)
Branch : spark-branch
Context :
Spearman correlations are a key part of pandas-profiling, and help elucidate rank based correlation statistics.
Problem :
The Spark docs mention that Spearman correlations can be optimised through good use of caching, because of the rank computations and RDD-level manipulations they require. https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.ml.stat.Correlation.html
Solution :
We would need to figure out how to do the caching, and also measure how large the speed improvement is (to actually know whether we have implemented the caching solution properly).
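A rough sketch of what the cached computation might look like (column names and the exact placement of `.cache()` are assumptions to be validated against the spark-branch code):
```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.stat import Correlation

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1.0, 2.0), (2.0, 1.0), (3.0, 4.0)], ["a", "b"])

# assemble the numeric columns into one vector column and cache it so the
# internal rank computations for Spearman do not recompute the lineage
features = VectorAssembler(inputCols=["a", "b"], outputCol="features").transform(df).select("features")
features.cache()
spearman = Correlation.corr(features, "features", method="spearman").head()[0]
features.unpersist()
```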
| 0easy
|
Title: Feature Request: update ALERTS section from JSON output to be as complete as the HTML output
Body: ### Missing functionality
The ALERTS section from the to_json() output gives only the type of alert and the column name (e.g. HIGH_CORRELATION or ZEROS). For the HIGH_CORRELATION alert, for example, it doesn't say which columns it is highly correlated with (although for this specific one I can obtain it from a different section of the JSON output). For the ZEROS warning, the JSON output says '[ZEROS] alert on column XLarge Bags', but doesn't supply the information that the column has 704 zeros (while the HTML output does show this). See the two images below for the output difference:


### Proposed feature
A more extensive JSON output from the to_json() function. The ALERTS section should not only tell what alert/error was encountered, but also quantify it (e.g. not only display a [ZEROS] alert on column X, but, just as in the HTML output: [ZEROS] there were 704 zeros encountered in column X).
### Alternatives considered
In the rest of the JSON string output, details about the correlations can be obtained, but for the zero counts (ZEROS) this is not possible, as that part of the JSON output is a bar chart written out as HTML.
### Additional context
shouldn't be too difficult to fix, as the information exists in the HTML report, but not in the to_json() JSON string | 0easy
|
Title: worker_id fixture has no docstring
Body: `pytest --fixtures` shows "no docstring available" | 0easy
|
Title: The user interface has no login feature
Body: 1. For bug reports, please describe the minimal steps to reproduce
The user interface of the running code has no login feature, although the demo link does have one
2. General questions: 99% of the answers are in the help docs, please read them carefully: https://kmfaka.baklib-free.com/
It can no longer be opened
Probably because it has been a long time.
| 0easy
|
Title: `items` of `ListBlock` in `StreamField` have same `id` as the `ListBlock` itself
Body: Hi,
I was just working with a `ListBlock` in a `StreamField` and noticed an error message in the developer console.
```
Warning: Encountered two children with the same key, `97c098cb-2ad1-445f-9ea7-36667656108c`. Keys should be unique so that components maintain their identity across updates. Non-unique keys may cause children to be duplicated and/or omitted — the behavior is unsupported and could change in a future version.
```
Running a query on the GraphQL endpoint reveals that all the `items` of `ListBlock` in `StreamField` have same `id` as the `ListBlock` itself (and thus all the `items` have the same `id`).
```graphql
query {
page(slug: "first-article") {
id
title
url
... on BlogPage {
freeformbody {
... on ListBlock {
id
items {
id
}
}
}
}
}
}
```
->
```json
{
"data": {
"page": {
"id": "5",
"title": "First Article",
"url": "/articles/first-article/",
"freeformbody": [
...
{
"id": "97c098cb-2ad1-445f-9ea7-36667656108c",
"items": [
{
"id": "97c098cb-2ad1-445f-9ea7-36667656108c"
},
{
"id": "97c098cb-2ad1-445f-9ea7-36667656108c"
},
{
"id": "97c098cb-2ad1-445f-9ea7-36667656108c"
}
]
},
...
]
}
}
}
```
Not sure what downstream error this might cause, but I guess the React warning will have it's reasons 😉 | 0easy
|
Title: On mobile, after uploading images when listing a product, the page can no longer scroll up or down, so the information at the bottom cannot be filled in or saved
Body: 1. For bug reports, please describe the minimal steps to reproduce
2. General questions: 99% of the answers are in the help docs, please read them carefully: https://kmfaka.baklib-free.com/
3. New feature or concept submissions: please describe them in text or annotate screenshots | 0easy
|
Title: Quieter output on check-status
Body: **What enhancement would you like to see?**
I would like to disable all warnings when using --check-status together with --quiet (that is the behaviour prior to issue #1026 ). It can be implemented as another flag (--quieter or --no-warnings)
**What problem does it solve?**
I am using a script with some failing scenarios being handled by the return code and I prefer a cleaner output, without the warnings.
**Provide any additional information, screenshots, or code examples below:**
| 0easy
|
Title: How to use start with a UWP application?
Body: I'm trying to automate the interaction with [WindowsTerminal](https://github.com/microsoft/terminal). It seems to be a UWP application, and I have no idea how to start such an application directly using `Application().start`. I've tried to locate it using `Desktop`, but got quite confused.
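For context, the approach I've seen suggested elsewhere (untested here, and the AppUserModelID below is an assumption for Windows Terminal) is to launch the app through `explorer.exe shell:AppsFolder\<AUMID>` and then attach with the UIA backend:
```python
from pywinauto import Application

aumid = "Microsoft.WindowsTerminal_8wekyb3d8bbwe!App"  # assumed AUMID, may differ per install
Application().start(f"explorer.exe shell:AppsFolder\\{aumid}")

# attach to the window once it exists
app = Application(backend="uia").connect(title_re=".*Terminal.*", timeout=20)
main_window = app.top_window()
```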
Any help on this? | 0easy
|
Title: Editing a product with automatic delivery in the admin backend raises an error
Body: Admin backend -> Product list -> Edit a product with automatic delivery -> Submit
Editing and submitting fails for every product with automatic delivery, while products with manual delivery can be edited successfully
| 0easy
|
Title: Static Type Hinting: `dataprofiler/reports/graphs.py`
Body: **Is your feature request related to a problem? Please describe.**
Improve the project in the area of
- Make collaboration and code contribution easier
- Getting started with the code when a contributor is new to the code base
**Describe the outcome you'd like:**
Utilizing types from [mypy](https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html) documentation for python3, implement type hinting in `dataprofiler/reports/graphs.py`.
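For illustration, the end state is just standard PEP 484 annotations on the module's functions, along the lines of the stub below (a hypothetical signature — the real names and types come from `graphs.py` itself):
```python
from typing import List, Optional

from matplotlib.figure import Figure

def plot_histograms(profiler: "StructuredProfiler", columns: Optional[List[str]] = None) -> Figure:
    ...  # body unchanged; only the annotations are added
```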
**Additional context:**
- [mypy](https://mypy.readthedocs.io/en/stable/cheat_sheet_py3.html) documentation
- [Why use python types?](https://medium.com/vacatronics/why-you-should-consider-python-type-hints-770e5cb1570f)
| 0easy
|
Title: `scanapi-report-example.png` not loading on PyPi description
Body: ## Description
At the [PyPI page description](https://pypi.org/project/scanapi/0.0.15/) the image [scanapi-report-example.png](https://github.com/camilamaia/scanapi/blob/master/images/scanapi-report-example.png) is not loading

This happens because a relative path is being used to link the image in README.md.
How it is now:
```
<p align="center">
<img src="images/scanapi-report-example.png" width="700">
</p>
```
How it should be:
```
<p align="center">
<img src="https://github.com/camilamaia/scanapi/blob/master/images/scanapi-report-example.png" width="700">
</p>
``` | 0easy
|
Title: Plot sector from PPI
Body: During the AMS Radar short course, one of the participants asked about the ability to take a PPI scan and plot a sector. This could "_easily_" be done by masking I would think (have to look at the structure a bit), but it could be a nice convenience function to add to the plotting library. | 0easy
|
Title: Media handler for `text/plain`
Body: As proposed by @maxking on Gitter:
> I have been trying to add support for msgpack in Mailman's API ([mailman/mailman!977](https://gitlab.com/mailman/mailman/-/merge_requests/977)) and in the process of doing that, I am trying to move off of doing the serialization in our code and setting the request.body/text to setting the `response.media` (in the actual resource) and `response.content_type` (before hand, in a middleware using `request.accept`) to let Falcon do the right thing. One thing I started noticing is that we were still returning some plain text and there is no default handler for `text/plain` response (presumably because it can be done with `response.text` or something). I was wondering if y'all would be open to adding a handler for text/plain, which basically works like identity? | 0easy
|
Title: Delete button does not work for scheduler screen beyond page 1
Body: The fix will be to call addCallbacks() either for all rows or whenever the table of schedules is modified in some way. | 0easy
|
Title: [New feature] Add apply_to_images to GaussNoise
Body: | 0easy
|
Title: [BUG] Series.drop_duplicates raised a `TypeError`
Body: <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Failed to execute `Series.drop_duplicates`.
``` Python
In [75]: a = md.DataFrame(np.random.rand(10, 2), columns=['a', 'b'], chunk_size=2)
In [76]: a['a'].drop_duplicates().execute()
0%| | 0/100 [00:00<?, ?it/s]Failed to run subtask l8o2G1V5iJMZVFK7USec2C0k on band numa-0
Traceback (most recent call last):
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/scheduling/worker/execution.py", line 263, in internal_run_subtask
subtask, band_name, subtask_api, batch_quota_req)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/scheduling/worker/execution.py", line 340, in _retry_run_subtask
return await _retry_run(subtask, subtask_info, _run_subtask_once)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/scheduling/worker/execution.py", line 83, in _retry_run
raise ex
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/scheduling/worker/execution.py", line 67, in _retry_run
return await target_async_func(*args)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/scheduling/worker/execution.py", line 301, in _run_subtask_once
return await asyncio.shield(aiotask)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/subtask/api.py", line 59, in run_subtask_in_slot
return await ref.run_subtask(subtask)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/context.py", line 154, in send
return self._process_result_message(result)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/context.py", line 59, in _process_result_message
raise message.error.with_traceback(message.traceback)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/pool.py", line 496, in send
result = await future
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/api.py", line 118, in __on_receive__
return await super().__on_receive__(message)
File "mars/oscar/core.pyx", line 351, in __on_receive__
raise ex
File "mars/oscar/core.pyx", line 345, in mars.oscar.core._BaseActor.__on_receive__
return await self._handle_actor_result(result)
File "mars/oscar/core.pyx", line 250, in _handle_actor_result
result = list(dones)[0].result()
File "mars/oscar/core.pyx", line 273, in mars.oscar.core._BaseActor._run_actor_async_generator
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 275, in mars.oscar.core._BaseActor._run_actor_async_generator
async with self._lock:
File "mars/oscar/core.pyx", line 279, in mars.oscar.core._BaseActor._run_actor_async_generator
res = await gen.athrow(*res)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/subtask/worker/runner.py", line 104, in run_subtask
result = yield self._running_processor.run(subtask)
File "mars/oscar/core.pyx", line 284, in mars.oscar.core._BaseActor._run_actor_async_generator
res = await self._handle_actor_result(res)
File "mars/oscar/core.pyx", line 219, in _handle_actor_result
result = await result
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/context.py", line 154, in send
return self._process_result_message(result)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/context.py", line 59, in _process_result_message
raise message.error.with_traceback(message.traceback)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/pool.py", line 496, in send
result = await future
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/api.py", line 118, in __on_receive__
return await super().__on_receive__(message)
File "mars/oscar/core.pyx", line 351, in __on_receive__
raise ex
File "mars/oscar/core.pyx", line 345, in mars.oscar.core._BaseActor.__on_receive__
return await self._handle_actor_result(result)
File "mars/oscar/core.pyx", line 250, in _handle_actor_result
result = list(dones)[0].result()
File "mars/oscar/core.pyx", line 273, in mars.oscar.core._BaseActor._run_actor_async_generator
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 275, in mars.oscar.core._BaseActor._run_actor_async_generator
async with self._lock:
File "mars/oscar/core.pyx", line 279, in mars.oscar.core._BaseActor._run_actor_async_generator
res = await gen.athrow(*res)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/subtask/worker/processor.py", line 482, in run
result = yield self._running_aio_task
File "mars/oscar/core.pyx", line 284, in mars.oscar.core._BaseActor._run_actor_async_generator
res = await self._handle_actor_result(res)
File "mars/oscar/core.pyx", line 219, in _handle_actor_result
result = await result
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/subtask/worker/processor.py", line 374, in run
stored_keys, store_sizes, memory_sizes, data_key_to_object_id = await self._store_data(chunk_graph)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/subtask/worker/processor.py", line 248, in _store_data
result_chunk.params = result_chunk.get_params_from_data(result_data)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/dataframe/core.py", line 1443, in get_params_from_data
value=data.dtypes)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/dataframe/core.py", line 355, in __init__
super().__init__(_key=key, _value=value, **kw)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/serialization/serializables/core.py", line 67, in __init__
object.__setattr__(self, key, val)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/serialization/serializables/field.py", line 106, in __set__
raise type(e)(f'Failed to set `{self._attr_name}`: {str(e)}')
TypeError: Failed to set `_value`: value needs to be instance of (<class 'pandas.core.series.Series'>,), got <class 'numpy.dtype[float64]'>
Subtask l8o2G1V5iJMZVFK7USec2C0k errored
Traceback (most recent call last):
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/scheduling/worker/execution.py", line 263, in internal_run_subtask
subtask, band_name, subtask_api, batch_quota_req)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/scheduling/worker/execution.py", line 340, in _retry_run_subtask
return await _retry_run(subtask, subtask_info, _run_subtask_once)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/scheduling/worker/execution.py", line 83, in _retry_run
raise ex
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/scheduling/worker/execution.py", line 67, in _retry_run
return await target_async_func(*args)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/scheduling/worker/execution.py", line 301, in _run_subtask_once
return await asyncio.shield(aiotask)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/subtask/api.py", line 59, in run_subtask_in_slot
return await ref.run_subtask(subtask)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/context.py", line 154, in send
return self._process_result_message(result)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/context.py", line 59, in _process_result_message
raise message.error.with_traceback(message.traceback)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/pool.py", line 496, in send
result = await future
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/api.py", line 118, in __on_receive__
return await super().__on_receive__(message)
File "mars/oscar/core.pyx", line 351, in __on_receive__
raise ex
File "mars/oscar/core.pyx", line 345, in mars.oscar.core._BaseActor.__on_receive__
return await self._handle_actor_result(result)
File "mars/oscar/core.pyx", line 250, in _handle_actor_result
result = list(dones)[0].result()
File "mars/oscar/core.pyx", line 273, in mars.oscar.core._BaseActor._run_actor_async_generator
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 275, in mars.oscar.core._BaseActor._run_actor_async_generator
async with self._lock:
File "mars/oscar/core.pyx", line 279, in mars.oscar.core._BaseActor._run_actor_async_generator
res = await gen.athrow(*res)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/subtask/worker/runner.py", line 104, in run_subtask
result = yield self._running_processor.run(subtask)
File "mars/oscar/core.pyx", line 284, in mars.oscar.core._BaseActor._run_actor_async_generator
res = await self._handle_actor_result(res)
File "mars/oscar/core.pyx", line 219, in _handle_actor_result
result = await result
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/context.py", line 154, in send
return self._process_result_message(result)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/context.py", line 59, in _process_result_message
raise message.error.with_traceback(message.traceback)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/backends/pool.py", line 496, in send
result = await future
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/oscar/api.py", line 118, in __on_receive__
return await super().__on_receive__(message)
File "mars/oscar/core.pyx", line 351, in __on_receive__
raise ex
File "mars/oscar/core.pyx", line 345, in mars.oscar.core._BaseActor.__on_receive__
return await self._handle_actor_result(result)
File "mars/oscar/core.pyx", line 250, in _handle_actor_result
result = list(dones)[0].result()
File "mars/oscar/core.pyx", line 273, in mars.oscar.core._BaseActor._run_actor_async_generator
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 275, in mars.oscar.core._BaseActor._run_actor_async_generator
async with self._lock:
File "mars/oscar/core.pyx", line 279, in mars.oscar.core._BaseActor._run_actor_async_generator
res = await gen.athrow(*res)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/subtask/worker/processor.py", line 482, in run
result = yield self._running_aio_task
File "mars/oscar/core.pyx", line 284, in mars.oscar.core._BaseActor._run_actor_async_generator
res = await self._handle_actor_result(res)
File "mars/oscar/core.pyx", line 219, in _handle_actor_result
result = await result
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/subtask/worker/processor.py", line 374, in run
stored_keys, store_sizes, memory_sizes, data_key_to_object_id = await self._store_data(chunk_graph)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/services/subtask/worker/processor.py", line 248, in _store_data
result_chunk.params = result_chunk.get_params_from_data(result_data)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/dataframe/core.py", line 1443, in get_params_from_data
value=data.dtypes)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/dataframe/core.py", line 355, in __init__
super().__init__(_key=key, _value=value, **kw)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/serialization/serializables/core.py", line 67, in __init__
object.__setattr__(self, key, val)
File "/Users/hekaisheng/Documents/mars_dev/mars/mars/serialization/serializables/field.py", line 106, in __set__
raise type(e)(f'Failed to set `{self._attr_name}`: {str(e)}')
TypeError: Failed to set `_value`: value needs to be instance of (<class 'pandas.core.series.Series'>,), got <class 'numpy.dtype[float64]'>
```
| 0easy
|
Title: [Feature request] Add apply_to_images to D4
Body: | 0easy
|
Title: Styler object has no attribute 'map'
Body: See https://panel.holoviz.org/gallery/portfolio_analyzer.html

| 0easy
|
Title: Update third party tools to newer versions
Body: From #2820 (@tomaarsen)
The Stanford and MaltParser tools have had newer versions released than the ones used in `third-party.sh`. This may be as simple as modifying the download link to point to the new versions.
| 0easy
|
Title: Running solara app in a docker container with ipv6
Body: Hi guys,
I am not able to run my solara app in a docker container. Running the docker container results in the following error:
`ERROR: [Errno 99] error while attempting to bind on address ('::1', 8765, 0, 0): cannot assign requested address`
Here is what my Dockerfile looks like
```Dockerfile
#
# Build image
#
FROM python:3.11-slim as builder
ENV PYTHONFAULTHANDLER=1 \
PYTHONHASHSEED=random \
PYTHONUNBUFFERED=1 \
POETRY_HOME=/opt/poetry
WORKDIR /app
COPY . .
RUN apt update -y && apt upgrade -y && apt install curl -y
RUN curl -sSL https://install.python-poetry.org | python3 -
RUN ${POETRY_HOME}/bin/poetry config virtualenvs.create false
RUN ${POETRY_HOME}/bin/poetry install --no-dev
RUN ${POETRY_HOME}/bin/poetry export -f requirements.txt >> requirements.txt
#
# Prod image
#
FROM python:3.11-slim AS runtime
WORKDIR /app
COPY . .
COPY --from=builder /app/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8765
CMD ["solara", "run", "src/app.py"]
``` | 0easy
|